Test Report: KVM_Linux_crio 20316

afc1769d7af9cf0fbffe1101eacbcd6e5c84f215:2025-01-27:38084

Tests failed (12/312)

TestAddons/parallel/Ingress (155.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-903003 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-903003 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-903003 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [67533d9e-df51-412f-a10f-c3983796b129] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [67533d9e-df51-412f-a10f-c3983796b129] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.003918434s
I0127 01:52:34.621084  904889 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-903003 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.925239982s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-903003 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.61
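The failing step above is the in-VM request: minikube ssh runs curl against http://127.0.0.1/ with the Host header nginx.example.com, and no response arrives before the command gives up. The "ssh: Process exited with status 28" in stderr is the remote command's exit status passed through by minikube ssh, and 28 is curl's code for an operation timeout, so the ingress controller most likely never answered on port 80 inside the node. A rough sketch for re-running the same check by hand against this profile (the --max-time flag and the kubectl queries are illustrative additions, not part of the test):

	# repeat the in-VM curl the test issues, with an explicit client-side timeout
	out/minikube-linux-amd64 -p addons-903003 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# inspect the ingress controller and the nginx pod the test created
	kubectl --context addons-903003 -n ingress-nginx get pods -o wide
	kubectl --context addons-903003 get pods -l run=nginx -o wide
	kubectl --context addons-903003 get ingress,svc
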
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-903003 -n addons-903003
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 logs -n 25: (1.228430683s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-930762                                                                     | download-only-930762 | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC | 27 Jan 25 01:48 UTC |
	| delete  | -p download-only-342001                                                                     | download-only-342001 | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC | 27 Jan 25 01:48 UTC |
	| delete  | -p download-only-930762                                                                     | download-only-930762 | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC | 27 Jan 25 01:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-522891 | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC |                     |
	|         | binary-mirror-522891                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41687                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-522891                                                                     | binary-mirror-522891 | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC | 27 Jan 25 01:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC |                     |
	|         | addons-903003                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC |                     |
	|         | addons-903003                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-903003 --wait=true                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:48 UTC | 27 Jan 25 01:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:51 UTC | 27 Jan 25 01:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:51 UTC | 27 Jan 25 01:51 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:51 UTC | 27 Jan 25 01:51 UTC |
	|         | -p addons-903003                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:51 UTC | 27 Jan 25 01:51 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-903003 ip                                                                            | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-903003 ssh cat                                                                       | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | /opt/local-path-provisioner/pvc-04e58c29-5f8a-434e-a75a-c12322d29d11_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-903003 addons disable                                                                | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:53 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-903003 ssh curl -s                                                                   | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-903003 addons                                                                        | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:52 UTC | 27 Jan 25 01:52 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-903003 ip                                                                            | addons-903003        | jenkins | v1.35.0 | 27 Jan 25 01:54 UTC | 27 Jan 25 01:54 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 01:48:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 01:48:10.072575  905591 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:48:10.072845  905591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:48:10.072856  905591 out.go:358] Setting ErrFile to fd 2...
	I0127 01:48:10.072861  905591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:48:10.073125  905591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 01:48:10.074423  905591 out.go:352] Setting JSON to false
	I0127 01:48:10.075670  905591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9033,"bootTime":1737933457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:48:10.075740  905591 start.go:139] virtualization: kvm guest
	I0127 01:48:10.077525  905591 out.go:177] * [addons-903003] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:48:10.078654  905591 notify.go:220] Checking for updates...
	I0127 01:48:10.078669  905591 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 01:48:10.079871  905591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:48:10.081243  905591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:48:10.082502  905591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:48:10.083749  905591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 01:48:10.085071  905591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 01:48:10.086541  905591 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:48:10.118243  905591 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 01:48:10.119423  905591 start.go:297] selected driver: kvm2
	I0127 01:48:10.119444  905591 start.go:901] validating driver "kvm2" against <nil>
	I0127 01:48:10.119462  905591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 01:48:10.120209  905591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:48:10.120310  905591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 01:48:10.135476  905591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 01:48:10.135525  905591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 01:48:10.135771  905591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 01:48:10.135809  905591 cni.go:84] Creating CNI manager for ""
	I0127 01:48:10.135861  905591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 01:48:10.135871  905591 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 01:48:10.135922  905591 start.go:340] cluster config:
	{Name:addons-903003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-903003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:48:10.136028  905591 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:48:10.137746  905591 out.go:177] * Starting "addons-903003" primary control-plane node in "addons-903003" cluster
	I0127 01:48:10.138938  905591 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 01:48:10.138983  905591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 01:48:10.138995  905591 cache.go:56] Caching tarball of preloaded images
	I0127 01:48:10.139098  905591 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 01:48:10.139109  905591 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 01:48:10.139411  905591 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/config.json ...
	I0127 01:48:10.139432  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/config.json: {Name:mked7cc2e76e1b3569332838d117022ee9bea1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:10.139570  905591 start.go:360] acquireMachinesLock for addons-903003: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 01:48:10.139621  905591 start.go:364] duration metric: took 32.302µs to acquireMachinesLock for "addons-903003"
	I0127 01:48:10.139640  905591 start.go:93] Provisioning new machine with config: &{Name:addons-903003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-903003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 01:48:10.139694  905591 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 01:48:10.141115  905591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 01:48:10.141256  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:10.141305  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:10.155867  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0127 01:48:10.156407  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:10.157086  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:10.157110  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:10.157481  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:10.157683  905591 main.go:141] libmachine: (addons-903003) Calling .GetMachineName
	I0127 01:48:10.157828  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:10.157960  905591 start.go:159] libmachine.API.Create for "addons-903003" (driver="kvm2")
	I0127 01:48:10.157989  905591 client.go:168] LocalClient.Create starting
	I0127 01:48:10.158024  905591 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 01:48:10.215151  905591 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 01:48:10.276640  905591 main.go:141] libmachine: Running pre-create checks...
	I0127 01:48:10.276665  905591 main.go:141] libmachine: (addons-903003) Calling .PreCreateCheck
	I0127 01:48:10.277217  905591 main.go:141] libmachine: (addons-903003) Calling .GetConfigRaw
	I0127 01:48:10.277661  905591 main.go:141] libmachine: Creating machine...
	I0127 01:48:10.277676  905591 main.go:141] libmachine: (addons-903003) Calling .Create
	I0127 01:48:10.277820  905591 main.go:141] libmachine: (addons-903003) creating KVM machine...
	I0127 01:48:10.277833  905591 main.go:141] libmachine: (addons-903003) creating network...
	I0127 01:48:10.279088  905591 main.go:141] libmachine: (addons-903003) DBG | found existing default KVM network
	I0127 01:48:10.279937  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:10.279771  905613 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I0127 01:48:10.279961  905591 main.go:141] libmachine: (addons-903003) DBG | created network xml: 
	I0127 01:48:10.279970  905591 main.go:141] libmachine: (addons-903003) DBG | <network>
	I0127 01:48:10.279976  905591 main.go:141] libmachine: (addons-903003) DBG |   <name>mk-addons-903003</name>
	I0127 01:48:10.279985  905591 main.go:141] libmachine: (addons-903003) DBG |   <dns enable='no'/>
	I0127 01:48:10.279995  905591 main.go:141] libmachine: (addons-903003) DBG |   
	I0127 01:48:10.280005  905591 main.go:141] libmachine: (addons-903003) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 01:48:10.280016  905591 main.go:141] libmachine: (addons-903003) DBG |     <dhcp>
	I0127 01:48:10.280023  905591 main.go:141] libmachine: (addons-903003) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 01:48:10.280032  905591 main.go:141] libmachine: (addons-903003) DBG |     </dhcp>
	I0127 01:48:10.280038  905591 main.go:141] libmachine: (addons-903003) DBG |   </ip>
	I0127 01:48:10.280044  905591 main.go:141] libmachine: (addons-903003) DBG |   
	I0127 01:48:10.280049  905591 main.go:141] libmachine: (addons-903003) DBG | </network>
	I0127 01:48:10.280056  905591 main.go:141] libmachine: (addons-903003) DBG | 
	I0127 01:48:10.285392  905591 main.go:141] libmachine: (addons-903003) DBG | trying to create private KVM network mk-addons-903003 192.168.39.0/24...
	I0127 01:48:10.352517  905591 main.go:141] libmachine: (addons-903003) DBG | private KVM network mk-addons-903003 192.168.39.0/24 created
	I0127 01:48:10.352553  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:10.352459  905613 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:48:10.352565  905591 main.go:141] libmachine: (addons-903003) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003 ...
	I0127 01:48:10.352598  905591 main.go:141] libmachine: (addons-903003) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 01:48:10.352616  905591 main.go:141] libmachine: (addons-903003) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 01:48:10.659574  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:10.659405  905613 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa...
	I0127 01:48:10.816356  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:10.816183  905613 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/addons-903003.rawdisk...
	I0127 01:48:10.816392  905591 main.go:141] libmachine: (addons-903003) DBG | Writing magic tar header
	I0127 01:48:10.816406  905591 main.go:141] libmachine: (addons-903003) DBG | Writing SSH key tar header
	I0127 01:48:10.816419  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:10.816307  905613 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003 ...
	I0127 01:48:10.816435  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003
	I0127 01:48:10.816468  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003 (perms=drwx------)
	I0127 01:48:10.816484  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 01:48:10.816491  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 01:48:10.816500  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:48:10.816529  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 01:48:10.816541  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 01:48:10.816551  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 01:48:10.816566  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home/jenkins
	I0127 01:48:10.816572  905591 main.go:141] libmachine: (addons-903003) DBG | checking permissions on dir: /home
	I0127 01:48:10.816578  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 01:48:10.816585  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 01:48:10.816590  905591 main.go:141] libmachine: (addons-903003) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 01:48:10.816599  905591 main.go:141] libmachine: (addons-903003) creating domain...
	I0127 01:48:10.816614  905591 main.go:141] libmachine: (addons-903003) DBG | skipping /home - not owner
	I0127 01:48:10.817786  905591 main.go:141] libmachine: (addons-903003) define libvirt domain using xml: 
	I0127 01:48:10.817824  905591 main.go:141] libmachine: (addons-903003) <domain type='kvm'>
	I0127 01:48:10.817832  905591 main.go:141] libmachine: (addons-903003)   <name>addons-903003</name>
	I0127 01:48:10.817837  905591 main.go:141] libmachine: (addons-903003)   <memory unit='MiB'>4000</memory>
	I0127 01:48:10.817845  905591 main.go:141] libmachine: (addons-903003)   <vcpu>2</vcpu>
	I0127 01:48:10.817854  905591 main.go:141] libmachine: (addons-903003)   <features>
	I0127 01:48:10.817863  905591 main.go:141] libmachine: (addons-903003)     <acpi/>
	I0127 01:48:10.817870  905591 main.go:141] libmachine: (addons-903003)     <apic/>
	I0127 01:48:10.817904  905591 main.go:141] libmachine: (addons-903003)     <pae/>
	I0127 01:48:10.817925  905591 main.go:141] libmachine: (addons-903003)     
	I0127 01:48:10.817934  905591 main.go:141] libmachine: (addons-903003)   </features>
	I0127 01:48:10.817939  905591 main.go:141] libmachine: (addons-903003)   <cpu mode='host-passthrough'>
	I0127 01:48:10.817968  905591 main.go:141] libmachine: (addons-903003)   
	I0127 01:48:10.817988  905591 main.go:141] libmachine: (addons-903003)   </cpu>
	I0127 01:48:10.818000  905591 main.go:141] libmachine: (addons-903003)   <os>
	I0127 01:48:10.818011  905591 main.go:141] libmachine: (addons-903003)     <type>hvm</type>
	I0127 01:48:10.818022  905591 main.go:141] libmachine: (addons-903003)     <boot dev='cdrom'/>
	I0127 01:48:10.818031  905591 main.go:141] libmachine: (addons-903003)     <boot dev='hd'/>
	I0127 01:48:10.818042  905591 main.go:141] libmachine: (addons-903003)     <bootmenu enable='no'/>
	I0127 01:48:10.818052  905591 main.go:141] libmachine: (addons-903003)   </os>
	I0127 01:48:10.818068  905591 main.go:141] libmachine: (addons-903003)   <devices>
	I0127 01:48:10.818083  905591 main.go:141] libmachine: (addons-903003)     <disk type='file' device='cdrom'>
	I0127 01:48:10.818101  905591 main.go:141] libmachine: (addons-903003)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/boot2docker.iso'/>
	I0127 01:48:10.818112  905591 main.go:141] libmachine: (addons-903003)       <target dev='hdc' bus='scsi'/>
	I0127 01:48:10.818121  905591 main.go:141] libmachine: (addons-903003)       <readonly/>
	I0127 01:48:10.818139  905591 main.go:141] libmachine: (addons-903003)     </disk>
	I0127 01:48:10.818153  905591 main.go:141] libmachine: (addons-903003)     <disk type='file' device='disk'>
	I0127 01:48:10.818166  905591 main.go:141] libmachine: (addons-903003)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 01:48:10.818187  905591 main.go:141] libmachine: (addons-903003)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/addons-903003.rawdisk'/>
	I0127 01:48:10.818203  905591 main.go:141] libmachine: (addons-903003)       <target dev='hda' bus='virtio'/>
	I0127 01:48:10.818216  905591 main.go:141] libmachine: (addons-903003)     </disk>
	I0127 01:48:10.818230  905591 main.go:141] libmachine: (addons-903003)     <interface type='network'>
	I0127 01:48:10.818247  905591 main.go:141] libmachine: (addons-903003)       <source network='mk-addons-903003'/>
	I0127 01:48:10.818258  905591 main.go:141] libmachine: (addons-903003)       <model type='virtio'/>
	I0127 01:48:10.818266  905591 main.go:141] libmachine: (addons-903003)     </interface>
	I0127 01:48:10.818276  905591 main.go:141] libmachine: (addons-903003)     <interface type='network'>
	I0127 01:48:10.818284  905591 main.go:141] libmachine: (addons-903003)       <source network='default'/>
	I0127 01:48:10.818294  905591 main.go:141] libmachine: (addons-903003)       <model type='virtio'/>
	I0127 01:48:10.818306  905591 main.go:141] libmachine: (addons-903003)     </interface>
	I0127 01:48:10.818320  905591 main.go:141] libmachine: (addons-903003)     <serial type='pty'>
	I0127 01:48:10.818354  905591 main.go:141] libmachine: (addons-903003)       <target port='0'/>
	I0127 01:48:10.818378  905591 main.go:141] libmachine: (addons-903003)     </serial>
	I0127 01:48:10.818391  905591 main.go:141] libmachine: (addons-903003)     <console type='pty'>
	I0127 01:48:10.818403  905591 main.go:141] libmachine: (addons-903003)       <target type='serial' port='0'/>
	I0127 01:48:10.818415  905591 main.go:141] libmachine: (addons-903003)     </console>
	I0127 01:48:10.818425  905591 main.go:141] libmachine: (addons-903003)     <rng model='virtio'>
	I0127 01:48:10.818435  905591 main.go:141] libmachine: (addons-903003)       <backend model='random'>/dev/random</backend>
	I0127 01:48:10.818445  905591 main.go:141] libmachine: (addons-903003)     </rng>
	I0127 01:48:10.818452  905591 main.go:141] libmachine: (addons-903003)     
	I0127 01:48:10.818463  905591 main.go:141] libmachine: (addons-903003)     
	I0127 01:48:10.818472  905591 main.go:141] libmachine: (addons-903003)   </devices>
	I0127 01:48:10.818482  905591 main.go:141] libmachine: (addons-903003) </domain>
	I0127 01:48:10.818493  905591 main.go:141] libmachine: (addons-903003) 
	I0127 01:48:10.822881  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:4a:90:7e in network default
	I0127 01:48:10.823480  905591 main.go:141] libmachine: (addons-903003) starting domain...
	I0127 01:48:10.823502  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:10.823511  905591 main.go:141] libmachine: (addons-903003) ensuring networks are active...
	I0127 01:48:10.824204  905591 main.go:141] libmachine: (addons-903003) Ensuring network default is active
	I0127 01:48:10.824576  905591 main.go:141] libmachine: (addons-903003) Ensuring network mk-addons-903003 is active
	I0127 01:48:10.825108  905591 main.go:141] libmachine: (addons-903003) getting domain XML...
	I0127 01:48:10.825843  905591 main.go:141] libmachine: (addons-903003) creating domain...
	I0127 01:48:12.024459  905591 main.go:141] libmachine: (addons-903003) waiting for IP...
	I0127 01:48:12.025334  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:12.025763  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:12.025810  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:12.025730  905613 retry.go:31] will retry after 194.390567ms: waiting for domain to come up
	I0127 01:48:12.222328  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:12.222712  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:12.222743  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:12.222691  905613 retry.go:31] will retry after 297.972714ms: waiting for domain to come up
	I0127 01:48:12.522263  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:12.522682  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:12.522718  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:12.522644  905613 retry.go:31] will retry after 398.397752ms: waiting for domain to come up
	I0127 01:48:12.923100  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:12.923500  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:12.923609  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:12.923468  905613 retry.go:31] will retry after 485.965007ms: waiting for domain to come up
	I0127 01:48:13.411076  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:13.411663  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:13.411694  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:13.411615  905613 retry.go:31] will retry after 634.581393ms: waiting for domain to come up
	I0127 01:48:14.047500  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:14.047985  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:14.048018  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:14.047912  905613 retry.go:31] will retry after 907.953059ms: waiting for domain to come up
	I0127 01:48:14.957192  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:14.957639  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:14.957674  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:14.957622  905613 retry.go:31] will retry after 887.048326ms: waiting for domain to come up
	I0127 01:48:15.846263  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:15.846776  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:15.846814  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:15.846750  905613 retry.go:31] will retry after 1.409041993s: waiting for domain to come up
	I0127 01:48:17.257083  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:17.257479  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:17.257506  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:17.257454  905613 retry.go:31] will retry after 1.585243155s: waiting for domain to come up
	I0127 01:48:18.845200  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:18.845640  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:18.845669  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:18.845615  905613 retry.go:31] will retry after 1.735122066s: waiting for domain to come up
	I0127 01:48:20.582571  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:20.583053  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:20.583094  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:20.583006  905613 retry.go:31] will retry after 1.962206895s: waiting for domain to come up
	I0127 01:48:22.548158  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:22.548641  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:22.548665  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:22.548598  905613 retry.go:31] will retry after 2.339393867s: waiting for domain to come up
	I0127 01:48:24.890450  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:24.890966  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:24.890996  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:24.890870  905613 retry.go:31] will retry after 2.92211726s: waiting for domain to come up
	I0127 01:48:27.816610  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:27.817015  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find current IP address of domain addons-903003 in network mk-addons-903003
	I0127 01:48:27.817038  905591 main.go:141] libmachine: (addons-903003) DBG | I0127 01:48:27.816998  905613 retry.go:31] will retry after 4.719323719s: waiting for domain to come up
	I0127 01:48:32.539027  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:32.539456  905591 main.go:141] libmachine: (addons-903003) found domain IP: 192.168.39.61
	I0127 01:48:32.539484  905591 main.go:141] libmachine: (addons-903003) reserving static IP address...
	I0127 01:48:32.539497  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has current primary IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:32.539780  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find host DHCP lease matching {name: "addons-903003", mac: "52:54:00:2d:89:fd", ip: "192.168.39.61"} in network mk-addons-903003
	I0127 01:48:32.611290  905591 main.go:141] libmachine: (addons-903003) reserved static IP address 192.168.39.61 for domain addons-903003
	I0127 01:48:32.611331  905591 main.go:141] libmachine: (addons-903003) DBG | Getting to WaitForSSH function...
	I0127 01:48:32.611340  905591 main.go:141] libmachine: (addons-903003) waiting for SSH...
	I0127 01:48:32.614062  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:32.614287  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003
	I0127 01:48:32.614320  905591 main.go:141] libmachine: (addons-903003) DBG | unable to find defined IP address of network mk-addons-903003 interface with MAC address 52:54:00:2d:89:fd
	I0127 01:48:32.614455  905591 main.go:141] libmachine: (addons-903003) DBG | Using SSH client type: external
	I0127 01:48:32.614477  905591 main.go:141] libmachine: (addons-903003) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa (-rw-------)
	I0127 01:48:32.614537  905591 main.go:141] libmachine: (addons-903003) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 01:48:32.614560  905591 main.go:141] libmachine: (addons-903003) DBG | About to run SSH command:
	I0127 01:48:32.614607  905591 main.go:141] libmachine: (addons-903003) DBG | exit 0
	I0127 01:48:32.618257  905591 main.go:141] libmachine: (addons-903003) DBG | SSH cmd err, output: exit status 255: 
	I0127 01:48:32.618279  905591 main.go:141] libmachine: (addons-903003) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 01:48:32.618289  905591 main.go:141] libmachine: (addons-903003) DBG | command : exit 0
	I0127 01:48:32.618312  905591 main.go:141] libmachine: (addons-903003) DBG | err     : exit status 255
	I0127 01:48:32.618325  905591 main.go:141] libmachine: (addons-903003) DBG | output  : 
	I0127 01:48:35.619138  905591 main.go:141] libmachine: (addons-903003) DBG | Getting to WaitForSSH function...
	I0127 01:48:35.621695  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.622171  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:35.622195  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.622335  905591 main.go:141] libmachine: (addons-903003) DBG | Using SSH client type: external
	I0127 01:48:35.622361  905591 main.go:141] libmachine: (addons-903003) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa (-rw-------)
	I0127 01:48:35.622417  905591 main.go:141] libmachine: (addons-903003) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 01:48:35.622446  905591 main.go:141] libmachine: (addons-903003) DBG | About to run SSH command:
	I0127 01:48:35.622462  905591 main.go:141] libmachine: (addons-903003) DBG | exit 0
	I0127 01:48:35.744805  905591 main.go:141] libmachine: (addons-903003) DBG | SSH cmd err, output: <nil>: 
	I0127 01:48:35.745090  905591 main.go:141] libmachine: (addons-903003) KVM machine creation complete
	I0127 01:48:35.745423  905591 main.go:141] libmachine: (addons-903003) Calling .GetConfigRaw
	I0127 01:48:35.746020  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:35.746232  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:35.746386  905591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 01:48:35.746403  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:35.747578  905591 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 01:48:35.747592  905591 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 01:48:35.747597  905591 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 01:48:35.747602  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:35.749808  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.750188  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:35.750214  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.750365  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:35.750544  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.750700  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.750830  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:35.750991  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:35.751217  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:35.751233  905591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 01:48:35.852206  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
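The lines above show libmachine probing the new VM for SSH readiness by repeatedly running the trivial command `exit 0` until it succeeds (the earlier attempt at 01:48:32 failed with exit status 255 and was retried a few seconds later). A minimal Go sketch of that polling pattern, not minikube's actual implementation, with the key path as a placeholder:

// wait_for_ssh.go - minimal sketch of polling a guest for SSH readiness by
// running "exit 0" remotely, as the log above does. The key path is a
// placeholder, not a value taken from this report.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(user, host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered and the trivial command succeeded
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
	}
	return fmt.Errorf("ssh on %s not ready after %d attempts", host, attempts)
}

func main() {
	if err := waitForSSH("docker", "192.168.39.61", "/path/to/id_rsa", 20); err != nil {
		fmt.Println(err)
	}
}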
	I0127 01:48:35.852235  905591 main.go:141] libmachine: Detecting the provisioner...
	I0127 01:48:35.852244  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:35.854886  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.855235  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:35.855268  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.855340  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:35.855627  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.855819  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.855987  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:35.856160  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:35.856341  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:35.856352  905591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 01:48:35.961514  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 01:48:35.961604  905591 main.go:141] libmachine: found compatible host: buildroot
	I0127 01:48:35.961611  905591 main.go:141] libmachine: Provisioning with buildroot...
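Provisioner detection above comes down to running `cat /etc/os-release` on the guest and matching the reported distribution (Buildroot here). A rough sketch of parsing that output, assuming nothing beyond the standard KEY=value format:

// osrelease.go - small sketch of parsing `cat /etc/os-release` output to
// detect the guest distribution, as the provisioner-detection step above does.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines into a map, stripping optional quotes.
func parseOSRelease(output string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		info[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return info
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if strings.EqualFold(info["ID"], "buildroot") {
		fmt.Println("found compatible host:", info["ID"])
	}
}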
	I0127 01:48:35.961620  905591 main.go:141] libmachine: (addons-903003) Calling .GetMachineName
	I0127 01:48:35.961927  905591 buildroot.go:166] provisioning hostname "addons-903003"
	I0127 01:48:35.961955  905591 main.go:141] libmachine: (addons-903003) Calling .GetMachineName
	I0127 01:48:35.962106  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:35.964853  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.965257  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:35.965289  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:35.965468  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:35.965687  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.965841  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:35.965954  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:35.966105  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:35.966333  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:35.966350  905591 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-903003 && echo "addons-903003" | sudo tee /etc/hostname
	I0127 01:48:36.084078  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-903003
	
	I0127 01:48:36.084115  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.086958  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.087296  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.087318  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.087494  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.087675  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.087865  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.087978  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.088166  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:36.088338  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:36.088353  905591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-903003' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-903003/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-903003' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 01:48:36.197012  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 01:48:36.197052  905591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 01:48:36.197080  905591 buildroot.go:174] setting up certificates
	I0127 01:48:36.197096  905591 provision.go:84] configureAuth start
	I0127 01:48:36.197110  905591 main.go:141] libmachine: (addons-903003) Calling .GetMachineName
	I0127 01:48:36.197398  905591 main.go:141] libmachine: (addons-903003) Calling .GetIP
	I0127 01:48:36.200183  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.200588  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.200626  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.200724  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.202792  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.203104  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.203137  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.203277  905591 provision.go:143] copyHostCerts
	I0127 01:48:36.203354  905591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 01:48:36.203494  905591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 01:48:36.203576  905591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 01:48:36.203631  905591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.addons-903003 san=[127.0.0.1 192.168.39.61 addons-903003 localhost minikube]
	I0127 01:48:36.300642  905591 provision.go:177] copyRemoteCerts
	I0127 01:48:36.300736  905591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 01:48:36.300768  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.303577  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.303893  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.303920  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.304159  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.304358  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.304489  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.304635  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:36.387012  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 01:48:36.409496  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 01:48:36.431089  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 01:48:36.452598  905591 provision.go:87] duration metric: took 255.482ms to configureAuth
	I0127 01:48:36.452634  905591 buildroot.go:189] setting minikube options for container-runtime
	I0127 01:48:36.452837  905591 config.go:182] Loaded profile config "addons-903003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 01:48:36.452939  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.455648  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.455999  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.456048  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.456188  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.456376  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.456506  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.456622  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.456762  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:36.456988  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:36.457008  905591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 01:48:36.671015  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 01:48:36.671048  905591 main.go:141] libmachine: Checking connection to Docker...
	I0127 01:48:36.671056  905591 main.go:141] libmachine: (addons-903003) Calling .GetURL
	I0127 01:48:36.672248  905591 main.go:141] libmachine: (addons-903003) DBG | using libvirt version 6000000
	I0127 01:48:36.674356  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.674645  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.674681  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.674766  905591 main.go:141] libmachine: Docker is up and running!
	I0127 01:48:36.674783  905591 main.go:141] libmachine: Reticulating splines...
	I0127 01:48:36.674793  905591 client.go:171] duration metric: took 26.516792438s to LocalClient.Create
	I0127 01:48:36.674819  905591 start.go:167] duration metric: took 26.516859865s to libmachine.API.Create "addons-903003"
	I0127 01:48:36.674831  905591 start.go:293] postStartSetup for "addons-903003" (driver="kvm2")
	I0127 01:48:36.674842  905591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 01:48:36.674865  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:36.675132  905591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 01:48:36.675157  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.677056  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.677303  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.677324  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.677461  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.677625  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.677822  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.677947  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:36.758842  905591 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 01:48:36.762670  905591 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 01:48:36.762700  905591 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 01:48:36.762763  905591 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 01:48:36.762804  905591 start.go:296] duration metric: took 87.968166ms for postStartSetup
	I0127 01:48:36.762842  905591 main.go:141] libmachine: (addons-903003) Calling .GetConfigRaw
	I0127 01:48:36.763465  905591 main.go:141] libmachine: (addons-903003) Calling .GetIP
	I0127 01:48:36.765905  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.766282  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.766304  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.766556  905591 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/config.json ...
	I0127 01:48:36.766771  905591 start.go:128] duration metric: took 26.627065375s to createHost
	I0127 01:48:36.766799  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.769002  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.769331  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.769371  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.769504  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.769698  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.769898  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.770030  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.770170  905591 main.go:141] libmachine: Using SSH client type: native
	I0127 01:48:36.770338  905591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 01:48:36.770348  905591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 01:48:36.873336  905591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737942516.851380199
	
	I0127 01:48:36.873365  905591 fix.go:216] guest clock: 1737942516.851380199
	I0127 01:48:36.873372  905591 fix.go:229] Guest: 2025-01-27 01:48:36.851380199 +0000 UTC Remote: 2025-01-27 01:48:36.766786683 +0000 UTC m=+26.732283097 (delta=84.593516ms)
	I0127 01:48:36.873406  905591 fix.go:200] guest clock delta is within tolerance: 84.593516ms
	I0127 01:48:36.873413  905591 start.go:83] releasing machines lock for "addons-903003", held for 26.733781731s
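The guest clock check above runs `date +%s.%N` on the VM and compares it against the host clock; the roughly 85ms delta is accepted as within tolerance. A small illustrative sketch of that comparison; the 2-second threshold is an assumption, not a value taken from minikube:

// clockdelta.go - sketch of checking guest/host clock skew against a
// tolerance, mirroring the "guest clock delta is within tolerance" line above.
// The 2s tolerance is an assumed value, and float parsing loses some
// nanosecond precision, which is fine for a skew check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) to time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737942516.851380199")
	if err != nil {
		fmt.Println(err)
		return
	}
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skew %v exceeds tolerance; consider syncing time\n", delta)
	}
}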
	I0127 01:48:36.873438  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:36.873707  905591 main.go:141] libmachine: (addons-903003) Calling .GetIP
	I0127 01:48:36.876265  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.876586  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.876615  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.876725  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:36.877187  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:36.877351  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:36.877464  905591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 01:48:36.877513  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.877566  905591 ssh_runner.go:195] Run: cat /version.json
	I0127 01:48:36.877591  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:36.879895  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.880132  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.880286  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.880310  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.880486  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:36.880510  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:36.880489  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.880622  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:36.880700  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.880793  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:36.880862  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.880967  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:36.881026  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:36.881084  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:36.987527  905591 ssh_runner.go:195] Run: systemctl --version
	I0127 01:48:36.993145  905591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 01:48:37.148596  905591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 01:48:37.154546  905591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 01:48:37.154633  905591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 01:48:37.169618  905591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 01:48:37.169646  905591 start.go:495] detecting cgroup driver to use...
	I0127 01:48:37.169715  905591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 01:48:37.185125  905591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 01:48:37.198406  905591 docker.go:217] disabling cri-docker service (if available) ...
	I0127 01:48:37.198507  905591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 01:48:37.211726  905591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 01:48:37.224987  905591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 01:48:37.331249  905591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 01:48:37.485968  905591 docker.go:233] disabling docker service ...
	I0127 01:48:37.486053  905591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 01:48:37.499791  905591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 01:48:37.511925  905591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 01:48:37.631397  905591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 01:48:37.749329  905591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 01:48:37.763113  905591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 01:48:37.780010  905591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 01:48:37.780084  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.789586  905591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 01:48:37.789664  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.799442  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.809032  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.818649  905591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 01:48:37.828308  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.837829  905591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 01:48:37.853762  905591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
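The series of `sed -i` commands above rewrites the CRI-O drop-in config: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is forced to cgroupfs, conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch of the same kind of line rewrite done in Go on an in-memory string (illustrative only; the real flow edits /etc/crio/crio.conf.d/02-crio.conf over SSH):

// crio_conf_rewrite.go - sketch of the sed-style edits above: rewrite
// pause_image and cgroup_manager in a CRI-O drop-in config. Works on a string
// here rather than on the remote file.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	sample := "# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.10", "cgroupfs"))
}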
	I0127 01:48:37.864104  905591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 01:48:37.873591  905591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 01:48:37.873651  905591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 01:48:37.885668  905591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
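The netfilter probe above is expected to fail on a fresh guest: the sysctl key only appears once the br_netfilter module is loaded, so the fallback is `modprobe br_netfilter`, followed by enabling IPv4 forwarding. A sketch of that sequence (requires root; standard procfs paths):

// netfilter_check.go - sketch of the bridge-netfilter / ip_forward preparation
// shown above: probe the sysctl path, load br_netfilter if it is missing, then
// enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(key); err != nil {
		// The sysctl is not exposed yet, so br_netfilter is probably not
		// loaded; try to load it (mirrors `sudo modprobe br_netfilter`).
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
			return
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}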
	I0127 01:48:37.894928  905591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 01:48:38.013738  905591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 01:48:38.100672  905591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 01:48:38.100768  905591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 01:48:38.105488  905591 start.go:563] Will wait 60s for crictl version
	I0127 01:48:38.105574  905591 ssh_runner.go:195] Run: which crictl
	I0127 01:48:38.109275  905591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 01:48:38.148132  905591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 01:48:38.148260  905591 ssh_runner.go:195] Run: crio --version
	I0127 01:48:38.175254  905591 ssh_runner.go:195] Run: crio --version
	I0127 01:48:38.205149  905591 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 01:48:38.206405  905591 main.go:141] libmachine: (addons-903003) Calling .GetIP
	I0127 01:48:38.208821  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:38.209193  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:38.209218  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:38.209437  905591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 01:48:38.213575  905591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 01:48:38.226461  905591 kubeadm.go:883] updating cluster {Name:addons-903003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-903003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 01:48:38.226596  905591 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 01:48:38.226647  905591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 01:48:38.258792  905591 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 01:48:38.258886  905591 ssh_runner.go:195] Run: which lz4
	I0127 01:48:38.262771  905591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 01:48:38.266825  905591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 01:48:38.266871  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 01:48:39.524548  905591 crio.go:462] duration metric: took 1.261793693s to copy over tarball
	I0127 01:48:39.524622  905591 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 01:48:41.711007  905591 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186346764s)
	I0127 01:48:41.711055  905591 crio.go:469] duration metric: took 2.186471924s to extract the tarball
	I0127 01:48:41.711066  905591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 01:48:41.748029  905591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 01:48:41.786995  905591 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 01:48:41.787024  905591 cache_images.go:84] Images are preloaded, skipping loading
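The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the roughly 398MB cached tarball when it does not, extracts it into /var with lz4-aware tar, and then re-runs `crictl images` to confirm the images are present. A local, simplified sketch of that decision; paths are placeholders and the copy runs locally instead of over SSH:

// preload.go - sketch of the preload-tarball flow above: if the tarball is
// not already on the target, copy it there, extract it with lz4-aware tar,
// and remove it afterwards.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensurePreload(src, dst, extractDir string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("preload tarball already present, skipping copy")
	} else {
		// Stand-in for the scp step in the log.
		if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copying preload tarball: %v\n%s", err, out)
		}
	}
	// Mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C <dir> -xf <tarball>
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", extractDir, "-xf", dst)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extracting preload tarball: %v\n%s", err, out)
	}
	return os.Remove(dst) // the log removes /preloaded.tar.lz4 after extraction
}

func main() {
	if err := ensurePreload("/path/to/preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}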
	I0127 01:48:41.787033  905591 kubeadm.go:934] updating node { 192.168.39.61 8443 v1.32.1 crio true true} ...
	I0127 01:48:41.787148  905591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-903003 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-903003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 01:48:41.787216  905591 ssh_runner.go:195] Run: crio config
	I0127 01:48:41.829892  905591 cni.go:84] Creating CNI manager for ""
	I0127 01:48:41.829922  905591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 01:48:41.829937  905591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 01:48:41.829969  905591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-903003 NodeName:addons-903003 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 01:48:41.830154  905591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-903003"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.61"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 01:48:41.830242  905591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 01:48:41.839979  905591 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 01:48:41.840053  905591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 01:48:41.852722  905591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 01:48:41.868996  905591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 01:48:41.885691  905591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
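The kubeadm config printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml. A rough structural sanity check in Go, splitting on document separators; illustrative only, not a real YAML parser:

// kubeadm_config_check.go - sketch: split a multi-document kubeadm config
// (like the one printed above) on "---" separators and verify every document
// declares apiVersion and kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func checkDocs(data string) error {
	for i, doc := range strings.Split(data, "\n---\n") {
		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
			return fmt.Errorf("document %d is missing apiVersion or kind", i)
		}
	}
	return nil
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := checkDocs(string(data)); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("all kubeadm config documents look structurally complete")
}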
	I0127 01:48:41.901326  905591 ssh_runner.go:195] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0127 01:48:41.904859  905591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 01:48:41.916687  905591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 01:48:42.041004  905591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 01:48:42.057527  905591 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003 for IP: 192.168.39.61
	I0127 01:48:42.057557  905591 certs.go:194] generating shared ca certs ...
	I0127 01:48:42.057586  905591 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.057760  905591 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 01:48:42.377633  905591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt ...
	I0127 01:48:42.377671  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt: {Name:mkb2a576576b8dab4680882e3f8dcc747847d0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.377840  905591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key ...
	I0127 01:48:42.377851  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key: {Name:mk1caeacc1d3f725ccbd06b3843a8047db26ef25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.377929  905591 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 01:48:42.515372  905591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt ...
	I0127 01:48:42.515405  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt: {Name:mk161124bd324bd2cbf2628db873a485fe604ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.515567  905591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key ...
	I0127 01:48:42.515578  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key: {Name:mk896bf79f6ad6a69ef8c515ef54e15b4343c100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.515645  905591 certs.go:256] generating profile certs ...
	I0127 01:48:42.515707  905591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.key
	I0127 01:48:42.515731  905591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt with IP's: []
	I0127 01:48:42.674266  905591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt ...
	I0127 01:48:42.674305  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: {Name:mk8b721dfaadd62456c6d01a81163c995c3d2d8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.674499  905591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.key ...
	I0127 01:48:42.674521  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.key: {Name:mkf3a866c2adffa010f14b26ffaf9bab6fefd89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.674619  905591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key.0202ed33
	I0127 01:48:42.674638  905591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt.0202ed33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.61]
	I0127 01:48:42.741053  905591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt.0202ed33 ...
	I0127 01:48:42.741089  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt.0202ed33: {Name:mk8d3a151eadf529a5e3ad3a91530f46e6a335fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.741254  905591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key.0202ed33 ...
	I0127 01:48:42.741268  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key.0202ed33: {Name:mk1874e1cb154046990fe1f71af2401c80d340e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.741344  905591 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt.0202ed33 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt
	I0127 01:48:42.741437  905591 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key.0202ed33 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key
	I0127 01:48:42.741495  905591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.key
	I0127 01:48:42.741515  905591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.crt with IP's: []
	I0127 01:48:42.807793  905591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.crt ...
	I0127 01:48:42.807831  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.crt: {Name:mk2981e740e9946f7e7f26d12261e80d11c772c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.807998  905591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.key ...
	I0127 01:48:42.808013  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.key: {Name:mkc5ddf08fd2cf41dc30645e46597802d71cc734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:42.808218  905591 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 01:48:42.808254  905591 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 01:48:42.808303  905591 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 01:48:42.808334  905591 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
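The certificate steps above generate a shared CA plus profile certificates, including an apiserver certificate whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.61. A compact sketch of the same idea using Go's crypto/x509 (error handling elided for brevity; output paths are placeholders):

// profile_certs.go - compact sketch of what the certs steps above amount to:
// create a CA key/cert, then sign an apiserver certificate carrying the
// cluster IP SANs seen in the log. Demo output goes to ./demo-certs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func writePEM(path, blockType string, der []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, &pem.Block{Type: blockType, Bytes: der})
}

func main() {
	// CA key and self-signed CA certificate (stands in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API server certificate with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.61")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	_ = os.MkdirAll("demo-certs", 0700)
	_ = writePEM("demo-certs/ca.crt", "CERTIFICATE", caDER)
	_ = writePEM("demo-certs/apiserver.crt", "CERTIFICATE", srvDER)
	fmt.Println("wrote demo-certs/ca.crt and demo-certs/apiserver.crt")
}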
	I0127 01:48:42.808983  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 01:48:42.836068  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 01:48:42.865058  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 01:48:42.889086  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 01:48:42.911454  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 01:48:42.933176  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 01:48:42.954735  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 01:48:42.976800  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 01:48:42.998869  905591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 01:48:43.020940  905591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 01:48:43.036562  905591 ssh_runner.go:195] Run: openssl version
	I0127 01:48:43.041933  905591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 01:48:43.052208  905591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 01:48:43.056382  905591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 01:48:43.056444  905591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 01:48:43.061955  905591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
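Installing the CA above amounts to copying minikubeCA.pem into the trust directory and symlinking it under its OpenSSL subject hash (b5213941.0 here), which is how OpenSSL-based tools look up trusted CAs. A sketch of computing the hash and creating the link (requires openssl on PATH and write access to /etc/ssl/certs):

// ca_hash_link.go - sketch of the CA installation step above: compute the
// OpenSSL subject hash of a PEM cert and symlink it into /etc/ssl/certs as
// <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}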
	I0127 01:48:43.072239  905591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 01:48:43.075837  905591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 01:48:43.075895  905591 kubeadm.go:392] StartCluster: {Name:addons-903003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-903003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:48:43.076007  905591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 01:48:43.076067  905591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 01:48:43.111736  905591 cri.go:89] found id: ""
	I0127 01:48:43.111817  905591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 01:48:43.121269  905591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 01:48:43.130789  905591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 01:48:43.141425  905591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 01:48:43.141448  905591 kubeadm.go:157] found existing configuration files:
	
	I0127 01:48:43.141506  905591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 01:48:43.149860  905591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 01:48:43.149934  905591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 01:48:43.158333  905591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 01:48:43.166588  905591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 01:48:43.166642  905591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 01:48:43.175229  905591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 01:48:43.183605  905591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 01:48:43.183660  905591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 01:48:43.192443  905591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 01:48:43.200854  905591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 01:48:43.200911  905591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
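	The sequence above is a per-file cleanup pass: each expected kubeconfig under /etc/kubernetes is probed for the control-plane endpoint and removed if the probe fails, so that kubeadm init can regenerate it. A minimal sketch of that check for one file (illustrative shell, not the exact minikube code):
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf; then
	    sudo rm -f /etc/kubernetes/admin.conf   # missing or stale endpoint: drop the file and let kubeadm rewrite it
	  fi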
	I0127 01:48:43.209857  905591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 01:48:43.262871  905591 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 01:48:43.262982  905591 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 01:48:43.363350  905591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 01:48:43.363532  905591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 01:48:43.363678  905591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 01:48:43.377445  905591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 01:48:43.491776  905591 out.go:235]   - Generating certificates and keys ...
	I0127 01:48:43.491927  905591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 01:48:43.492012  905591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 01:48:43.492098  905591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 01:48:43.707254  905591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 01:48:43.837025  905591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 01:48:44.055803  905591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 01:48:44.216993  905591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 01:48:44.217144  905591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-903003 localhost] and IPs [192.168.39.61 127.0.0.1 ::1]
	I0127 01:48:44.312157  905591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 01:48:44.312387  905591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-903003 localhost] and IPs [192.168.39.61 127.0.0.1 ::1]
	I0127 01:48:44.422829  905591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 01:48:44.557408  905591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 01:48:44.650890  905591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 01:48:44.650979  905591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 01:48:45.089967  905591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 01:48:45.210778  905591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 01:48:45.443477  905591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 01:48:45.505117  905591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 01:48:45.713154  905591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 01:48:45.714039  905591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 01:48:45.718370  905591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 01:48:45.720254  905591 out.go:235]   - Booting up control plane ...
	I0127 01:48:45.720376  905591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 01:48:45.720492  905591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 01:48:45.720751  905591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 01:48:45.750921  905591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 01:48:45.760996  905591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 01:48:45.761054  905591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 01:48:45.879302  905591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 01:48:45.879432  905591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 01:48:46.379724  905591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.954115ms
	I0127 01:48:46.379875  905591 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 01:48:50.879286  905591 kubeadm.go:310] [api-check] The API server is healthy after 4.501952437s
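	If a run stalls at either of these waits, both probes can be reproduced by hand (the profile-named kubectl context is an assumption based on minikube's defaults):
	  minikube -p addons-903003 ssh -- curl -fsS http://127.0.0.1:10248/healthz   # kubelet health endpoint from the log above
	  kubectl --context addons-903003 get --raw=/readyz                           # API server readiness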
	I0127 01:48:50.891588  905591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 01:48:50.904515  905591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 01:48:50.931649  905591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 01:48:50.931878  905591 kubeadm.go:310] [mark-control-plane] Marking the node addons-903003 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 01:48:50.941561  905591 kubeadm.go:310] [bootstrap-token] Using token: zgg5aa.5zfstgm90yk9bits
	I0127 01:48:50.942815  905591 out.go:235]   - Configuring RBAC rules ...
	I0127 01:48:50.942976  905591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 01:48:50.947000  905591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 01:48:50.953154  905591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 01:48:50.958506  905591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 01:48:50.961300  905591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 01:48:50.964454  905591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 01:48:51.285673  905591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 01:48:51.714412  905591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 01:48:52.336722  905591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 01:48:52.338130  905591 kubeadm.go:310] 
	I0127 01:48:52.338222  905591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 01:48:52.338233  905591 kubeadm.go:310] 
	I0127 01:48:52.338370  905591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 01:48:52.338380  905591 kubeadm.go:310] 
	I0127 01:48:52.338414  905591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 01:48:52.338503  905591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 01:48:52.338584  905591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 01:48:52.338591  905591 kubeadm.go:310] 
	I0127 01:48:52.338638  905591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 01:48:52.338645  905591 kubeadm.go:310] 
	I0127 01:48:52.338702  905591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 01:48:52.338711  905591 kubeadm.go:310] 
	I0127 01:48:52.338783  905591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 01:48:52.338881  905591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 01:48:52.338940  905591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 01:48:52.338946  905591 kubeadm.go:310] 
	I0127 01:48:52.339010  905591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 01:48:52.339090  905591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 01:48:52.339097  905591 kubeadm.go:310] 
	I0127 01:48:52.339185  905591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zgg5aa.5zfstgm90yk9bits \
	I0127 01:48:52.339320  905591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 01:48:52.339344  905591 kubeadm.go:310] 	--control-plane 
	I0127 01:48:52.339347  905591 kubeadm.go:310] 
	I0127 01:48:52.339453  905591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 01:48:52.339464  905591 kubeadm.go:310] 
	I0127 01:48:52.339579  905591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zgg5aa.5zfstgm90yk9bits \
	I0127 01:48:52.339689  905591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 01:48:52.340715  905591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 01:48:52.340742  905591 cni.go:84] Creating CNI manager for ""
	I0127 01:48:52.340756  905591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 01:48:52.343133  905591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 01:48:52.344351  905591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 01:48:52.358757  905591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
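	The generated CNI config can be inspected directly on the node, e.g.:
	  minikube -p addons-903003 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist   # typically a bridge plugin chained with portmap; 496 bytes in this run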
	I0127 01:48:52.377987  905591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 01:48:52.378122  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:52.378141  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-903003 minikube.k8s.io/updated_at=2025_01_27T01_48_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=addons-903003 minikube.k8s.io/primary=true
	I0127 01:48:52.502325  905591 ops.go:34] apiserver oom_adj: -16
	I0127 01:48:52.502502  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:53.002966  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:53.503288  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:54.003343  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:54.503078  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:55.003267  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:55.503181  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:56.003334  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:56.502655  905591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 01:48:56.586187  905591 kubeadm.go:1113] duration metric: took 4.208155898s to wait for elevateKubeSystemPrivileges
	I0127 01:48:56.586245  905591 kubeadm.go:394] duration metric: took 13.510352373s to StartCluster
	I0127 01:48:56.586274  905591 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:56.586407  905591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:48:56.586927  905591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 01:48:56.587198  905591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 01:48:56.587240  905591 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 01:48:56.587301  905591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
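	The same per-profile addon set can be inspected and toggled from the CLI once the cluster is up, for example:
	  minikube -p addons-903003 addons list
	  minikube -p addons-903003 addons enable metrics-server
	  minikube -p addons-903003 addons disable volcano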
	I0127 01:48:56.587443  905591 addons.go:69] Setting yakd=true in profile "addons-903003"
	I0127 01:48:56.587475  905591 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-903003"
	I0127 01:48:56.587480  905591 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-903003"
	I0127 01:48:56.587501  905591 config.go:182] Loaded profile config "addons-903003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 01:48:56.587514  905591 addons.go:69] Setting cloud-spanner=true in profile "addons-903003"
	I0127 01:48:56.587530  905591 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-903003"
	I0127 01:48:56.587523  905591 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-903003"
	I0127 01:48:56.587572  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.587587  905591 addons.go:238] Setting addon yakd=true in "addons-903003"
	I0127 01:48:56.587599  905591 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-903003"
	I0127 01:48:56.587622  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.587633  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.587655  905591 addons.go:69] Setting ingress-dns=true in profile "addons-903003"
	I0127 01:48:56.587679  905591 addons.go:69] Setting inspektor-gadget=true in profile "addons-903003"
	I0127 01:48:56.587718  905591 addons.go:69] Setting volcano=true in profile "addons-903003"
	I0127 01:48:56.587719  905591 addons.go:69] Setting volumesnapshots=true in profile "addons-903003"
	I0127 01:48:56.587755  905591 addons.go:238] Setting addon volumesnapshots=true in "addons-903003"
	I0127 01:48:56.587758  905591 addons.go:238] Setting addon inspektor-gadget=true in "addons-903003"
	I0127 01:48:56.587457  905591 addons.go:69] Setting default-storageclass=true in profile "addons-903003"
	I0127 01:48:56.587814  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.587824  905591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-903003"
	I0127 01:48:56.587657  905591 addons.go:238] Setting addon cloud-spanner=true in "addons-903003"
	I0127 01:48:56.587907  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.588116  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588116  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588154  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.588164  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.588229  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588269  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.587687  905591 addons.go:69] Setting registry=true in profile "addons-903003"
	I0127 01:48:56.588308  905591 addons.go:238] Setting addon registry=true in "addons-903003"
	I0127 01:48:56.587671  905591 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-903003"
	I0127 01:48:56.588343  905591 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-903003"
	I0127 01:48:56.588348  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588356  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588377  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.588381  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.588389  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.588423  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.588439  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.587686  905591 addons.go:69] Setting metrics-server=true in profile "addons-903003"
	I0127 01:48:56.588569  905591 addons.go:238] Setting addon metrics-server=true in "addons-903003"
	I0127 01:48:56.587689  905591 addons.go:238] Setting addon ingress-dns=true in "addons-903003"
	I0127 01:48:56.587697  905591 addons.go:69] Setting storage-provisioner=true in profile "addons-903003"
	I0127 01:48:56.588654  905591 addons.go:238] Setting addon storage-provisioner=true in "addons-903003"
	I0127 01:48:56.587698  905591 addons.go:69] Setting gcp-auth=true in profile "addons-903003"
	I0127 01:48:56.588686  905591 mustload.go:65] Loading cluster: addons-903003
	I0127 01:48:56.587667  905591 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-903003"
	I0127 01:48:56.587763  905591 addons.go:238] Setting addon volcano=true in "addons-903003"
	I0127 01:48:56.587792  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.588756  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.588908  905591 config.go:182] Loaded profile config "addons-903003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 01:48:56.589158  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.589317  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.589320  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.589349  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.589365  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.589392  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.587707  905591 addons.go:69] Setting ingress=true in profile "addons-903003"
	I0127 01:48:56.589416  905591 addons.go:238] Setting addon ingress=true in "addons-903003"
	I0127 01:48:56.589456  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.589893  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.589939  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.589960  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.590027  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.590165  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.590217  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.590245  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.590478  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.590528  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.590910  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.590947  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.591004  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.592810  905591 out.go:177] * Verifying Kubernetes components...
	I0127 01:48:56.634293  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I0127 01:48:56.634340  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0127 01:48:56.634478  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0127 01:48:56.634502  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0127 01:48:56.634590  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0127 01:48:56.634592  905591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 01:48:56.634949  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.634995  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.636611  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.636657  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.636702  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.636834  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.636943  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.637093  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.637192  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.637702  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.637755  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.638033  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.638051  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.638112  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.638129  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.638661  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.638681  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.638745  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.638915  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.638918  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.638955  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.639083  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.639121  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.639427  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.639493  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.639498  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.639535  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.639973  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.640137  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.640177  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.640264  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.640388  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.640429  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.641091  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.641142  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.644843  905591 addons.go:238] Setting addon default-storageclass=true in "addons-903003"
	I0127 01:48:56.644897  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.645339  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.645371  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.678802  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0127 01:48:56.678922  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36697
	I0127 01:48:56.679347  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.679403  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.680036  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.680059  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.680177  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.680193  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.680587  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.680836  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.682429  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0127 01:48:56.682437  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0127 01:48:56.682602  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.682889  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.683066  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.683584  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.683608  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.683821  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.683831  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0127 01:48:56.683846  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.684057  905591 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-903003"
	I0127 01:48:56.684106  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.684319  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.684473  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.684522  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.684868  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.684900  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.685364  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.685399  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.685960  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.686028  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0127 01:48:56.686097  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33829
	I0127 01:48:56.686628  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.686664  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.686775  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.686965  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.686987  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.687352  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.687369  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.687529  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.687542  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.687588  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0127 01:48:56.687588  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.687603  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.687761  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.687847  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.688007  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.688102  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.688162  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.688540  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.688577  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.688614  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.688626  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.689038  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.689180  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.689243  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.689276  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.690295  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.691937  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.692175  905591 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 01:48:56.694611  905591 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 01:48:56.694683  905591 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 01:48:56.694696  905591 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 01:48:56.694718  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.694811  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0127 01:48:56.695456  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0127 01:48:56.695922  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.696453  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.696525  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.696550  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.696677  905591 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 01:48:56.696696  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 01:48:56.696715  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.696978  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.697614  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.697645  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.697936  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.697959  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.698446  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.698700  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.699184  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41775
	I0127 01:48:56.699725  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.700359  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.700386  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.700761  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.701107  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0127 01:48:56.701272  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.701322  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0127 01:48:56.701449  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.701617  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.701664  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.702201  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.702221  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.702255  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.702326  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.702380  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.702400  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.702442  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.702492  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.702754  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.702817  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.702877  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.702902  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.703020  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.703078  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.703266  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.703285  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.703246  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
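	The addon manifests that follow are copied to the node over this SSH connection; the same session can be opened manually for debugging with the key and address shown above, or via minikube itself:
	  ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa docker@192.168.39.61
	  minikube -p addons-903003 ssh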
	I0127 01:48:56.703920  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.703951  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.704110  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.704275  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.704296  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.704350  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0127 01:48:56.705216  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.705318  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.705866  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.705894  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.706137  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.706187  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.706240  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.706446  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.707387  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 01:48:56.707441  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0127 01:48:56.708082  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.708234  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.709051  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.709077  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.709559  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.709690  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 01:48:56.709752  905591 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 01:48:56.709948  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.711344  905591 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 01:48:56.711366  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 01:48:56.711388  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.711703  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:48:56.712156  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.712231  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 01:48:56.712235  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.714923  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 01:48:56.715241  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.715719  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.715814  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.716102  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.716293  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.716507  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.716683  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.717080  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 01:48:56.718265  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 01:48:56.719702  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 01:48:56.721003  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 01:48:56.721474  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0127 01:48:56.722003  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.722170  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 01:48:56.722195  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 01:48:56.722221  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.722661  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.722688  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.724623  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.725510  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.725577  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.725829  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.726017  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.726046  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.726220  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.726380  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.726535  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.726673  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.727310  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0127 01:48:56.727833  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.728377  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.728408  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.728784  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.729080  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.730813  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.731050  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:48:56.731067  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:48:56.733097  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:48:56.733116  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:48:56.733124  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:48:56.733132  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:48:56.733096  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:48:56.733402  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0127 01:48:56.733471  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:48:56.733484  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 01:48:56.733581  905591 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 01:48:56.733835  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.734342  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.734366  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.734711  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.734927  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.736706  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.738447  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
	I0127 01:48:56.738636  905591 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 01:48:56.739115  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.739602  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0127 01:48:56.739894  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.739908  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.740257  905591 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 01:48:56.740274  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.740279  905591 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 01:48:56.740324  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.740326  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.740550  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.740918  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.740967  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.741321  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.741491  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.743277  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.743976  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.744444  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.744465  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.745038  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.745240  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.745409  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.745523  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.746116  905591 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 01:48:56.747426  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 01:48:56.747449  905591 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 01:48:56.747470  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.750927  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.751478  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.751509  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.751754  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.751937  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.752064  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.752224  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.753957  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0127 01:48:56.754603  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.755219  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.755239  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.755647  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.755843  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.756432  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I0127 01:48:56.756966  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.757630  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.757648  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.757719  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.758399  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.758600  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.759524  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0127 01:48:56.759763  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0127 01:48:56.760017  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.760236  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.760421  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0127 01:48:56.760607  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.760629  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.760725  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.760884  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.760901  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.761092  905591 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 01:48:56.761273  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.761562  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.761649  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.762079  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.762085  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.762170  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.762184  905591 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 01:48:56.762192  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.762352  905591 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 01:48:56.762365  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 01:48:56.762391  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.762587  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.763229  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.764083  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0127 01:48:56.764413  905591 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 01:48:56.764421  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.764760  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.764909  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.765432  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.765628  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.766023  905591 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 01:48:56.766142  905591 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 01:48:56.766496  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 01:48:56.766519  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.766340  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.766386  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0127 01:48:56.766413  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.766902  905591 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 01:48:56.766959  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.768015  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.768115  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.768372  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.768389  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.768486  905591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 01:48:56.768551  905591 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 01:48:56.768564  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.768569  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 01:48:56.768586  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.768837  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.769041  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.769175  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.769808  905591 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 01:48:56.769828  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 01:48:56.769846  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.772403  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.772429  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.772454  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.772459  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.772566  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.772622  905591 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 01:48:56.772860  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.773063  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.773665  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.773675  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.773703  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.773770  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.773786  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.774053  905591 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 01:48:56.774335  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.774336  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.774375  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.774393  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.774423  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.774444  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.774607  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.774633  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.774819  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.774822  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.774998  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.775010  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.775270  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.775388  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:48:56.775420  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:48:56.775465  905591 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 01:48:56.775478  905591 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 01:48:56.775494  905591 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 01:48:56.775518  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.776581  905591 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 01:48:56.776596  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 01:48:56.776609  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.779147  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.780066  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.780103  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.780110  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.780119  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.780278  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.780414  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.780487  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.780505  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.780696  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.780706  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.780818  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.780985  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.781122  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	W0127 01:48:56.784232  905591 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44990->192.168.39.61:22: read: connection reset by peer
	I0127 01:48:56.784268  905591 retry.go:31] will retry after 327.66111ms: ssh: handshake failed: read tcp 192.168.39.1:44990->192.168.39.61:22: read: connection reset by peer
	I0127 01:48:56.786484  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I0127 01:48:56.786917  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.787436  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.787464  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.787826  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.788043  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:48:56.789679  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.789910  905591 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 01:48:56.789925  905591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 01:48:56.789941  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.793373  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0127 01:48:56.793564  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.793818  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:48:56.793838  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.793863  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.794140  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.794252  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:48:56.794271  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:48:56.794367  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.794503  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.794604  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:48:56.794620  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:56.794818  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	W0127 01:48:56.796105  905591 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45004->192.168.39.61:22: read: connection reset by peer
	I0127 01:48:56.796135  905591 retry.go:31] will retry after 174.091249ms: ssh: handshake failed: read tcp 192.168.39.1:45004->192.168.39.61:22: read: connection reset by peer
	I0127 01:48:56.796343  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:48:56.798349  905591 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 01:48:56.799459  905591 out.go:177]   - Using image docker.io/busybox:stable
	I0127 01:48:56.800695  905591 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 01:48:56.800707  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 01:48:56.800723  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:48:56.803745  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.804259  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:48:56.804290  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:48:56.804442  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:48:56.804610  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:48:56.804784  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:48:56.804913  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:48:57.107133  905591 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 01:48:57.107163  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 01:48:57.171358  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 01:48:57.189067  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 01:48:57.189110  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 01:48:57.189758  905591 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 01:48:57.189780  905591 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 01:48:57.206744  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 01:48:57.225276  905591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 01:48:57.225321  905591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 01:48:57.250363  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 01:48:57.288033  905591 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 01:48:57.288074  905591 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 01:48:57.288276  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 01:48:57.295860  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 01:48:57.295889  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 01:48:57.305983  905591 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 01:48:57.306022  905591 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 01:48:57.325004  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 01:48:57.327016  905591 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 01:48:57.327046  905591 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 01:48:57.345067  905591 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 01:48:57.345095  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 01:48:57.361987  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 01:48:57.394136  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 01:48:57.423429  905591 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 01:48:57.423459  905591 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 01:48:57.470845  905591 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 01:48:57.470881  905591 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 01:48:57.487364  905591 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 01:48:57.487402  905591 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 01:48:57.539207  905591 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 01:48:57.539246  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 01:48:57.553814  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 01:48:57.553848  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 01:48:57.618056  905591 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 01:48:57.618086  905591 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 01:48:57.652990  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 01:48:57.698067  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 01:48:57.698108  905591 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 01:48:57.726050  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 01:48:57.728685  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 01:48:57.738548  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 01:48:57.755148  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 01:48:57.755186  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 01:48:57.863663  905591 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 01:48:57.863705  905591 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 01:48:57.982537  905591 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 01:48:57.982566  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 01:48:58.017092  905591 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 01:48:58.017125  905591 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 01:48:58.041269  905591 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 01:48:58.041302  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 01:48:58.283959  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 01:48:58.283990  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 01:48:58.291976  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 01:48:58.363305  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 01:48:58.724124  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 01:48:58.724166  905591 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 01:48:59.354604  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 01:48:59.354629  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 01:48:59.467147  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 01:48:59.467175  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 01:48:59.716367  905591 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 01:48:59.716402  905591 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 01:48:59.953822  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 01:49:00.843295  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.67189786s)
	I0127 01:49:00.843365  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.843379  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.843375  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.636585149s)
	I0127 01:49:00.843418  905591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.618064041s)
	I0127 01:49:00.843450  905591 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 01:49:00.843466  905591 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.618153824s)
	I0127 01:49:00.843429  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.843555  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.843588  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.555286231s)
	I0127 01:49:00.843620  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.843663  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.843558  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.5931594s)
	I0127 01:49:00.843747  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.843756  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.843898  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.843909  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.843933  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.843942  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.843965  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.843977  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.843991  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.844003  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.843996  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.844067  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.844098  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.844166  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.844188  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.844325  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.844339  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.844252  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.844410  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.844271  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.844282  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.844593  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.844622  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.844629  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.844670  905591 node_ready.go:35] waiting up to 6m0s for node "addons-903003" to be "Ready" ...
	I0127 01:49:00.844290  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.844757  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.845034  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.845154  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.845130  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.845180  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:00.845217  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:00.845472  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:00.845519  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:00.845536  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:00.855064  905591 node_ready.go:49] node "addons-903003" has status "Ready":"True"
	I0127 01:49:00.855099  905591 node_ready.go:38] duration metric: took 10.400257ms for node "addons-903003" to be "Ready" ...
	I0127 01:49:00.855112  905591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 01:49:00.875989  905591 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:01.357777  905591 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-903003" context rescaled to 1 replicas
	I0127 01:49:01.650502  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.32544625s)
	I0127 01:49:01.650528  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.288505983s)
	I0127 01:49:01.650568  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.650582  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.650592  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.256410973s)
	I0127 01:49:01.650629  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.650646  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.650634  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.650702  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.651112  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:01.651112  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:01.651125  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:01.651137  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.651190  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.651203  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.651223  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.651140  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.651246  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.651262  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.651157  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.651288  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.651300  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.651312  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.651274  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.651537  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.651551  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.653181  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.653198  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.653233  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:01.653243  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.653257  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.728803  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.728830  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.729112  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.729154  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:01.729162  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 01:49:01.729285  905591 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0127 01:49:01.754363  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:01.754392  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:01.754694  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:01.754717  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:01.754737  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:03.080011  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:03.567583  905591 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 01:49:03.567628  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:49:03.571111  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:49:03.571533  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:49:03.571566  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:49:03.571843  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:49:03.572082  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:49:03.572307  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:49:03.572469  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:49:03.619507  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.96646315s)
	I0127 01:49:03.619560  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:03.619570  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:03.619864  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:03.619957  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:03.619974  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:03.619984  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:03.619916  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:03.620202  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:03.620218  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:03.620245  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:03.913878  905591 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 01:49:04.008415  905591 addons.go:238] Setting addon gcp-auth=true in "addons-903003"
	I0127 01:49:04.008489  905591 host.go:66] Checking if "addons-903003" exists ...
	I0127 01:49:04.008849  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:49:04.008889  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:49:04.024344  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0127 01:49:04.024857  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:49:04.025513  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:49:04.025547  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:49:04.025899  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:49:04.026431  905591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:49:04.026462  905591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:49:04.042014  905591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0127 01:49:04.042480  905591 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:49:04.043032  905591 main.go:141] libmachine: Using API Version  1
	I0127 01:49:04.043067  905591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:49:04.043443  905591 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:49:04.043663  905591 main.go:141] libmachine: (addons-903003) Calling .GetState
	I0127 01:49:04.045147  905591 main.go:141] libmachine: (addons-903003) Calling .DriverName
	I0127 01:49:04.045411  905591 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 01:49:04.045436  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHHostname
	I0127 01:49:04.048464  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:49:04.048842  905591 main.go:141] libmachine: (addons-903003) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:fd", ip: ""} in network mk-addons-903003: {Iface:virbr1 ExpiryTime:2025-01-27 02:48:24 +0000 UTC Type:0 Mac:52:54:00:2d:89:fd Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:addons-903003 Clientid:01:52:54:00:2d:89:fd}
	I0127 01:49:04.048872  905591 main.go:141] libmachine: (addons-903003) DBG | domain addons-903003 has defined IP address 192.168.39.61 and MAC address 52:54:00:2d:89:fd in network mk-addons-903003
	I0127 01:49:04.049052  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHPort
	I0127 01:49:04.049292  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHKeyPath
	I0127 01:49:04.049473  905591 main.go:141] libmachine: (addons-903003) Calling .GetSSHUsername
	I0127 01:49:04.049617  905591 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/addons-903003/id_rsa Username:docker}
	I0127 01:49:04.628758  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.90266527s)
	I0127 01:49:04.628809  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.628818  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.628873  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.900149399s)
	I0127 01:49:04.628954  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.628966  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.890385111s)
	I0127 01:49:04.628976  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.628987  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.629005  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.629134  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.337107194s)
	W0127 01:49:04.629172  905591 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 01:49:04.629198  905591 retry.go:31] will retry after 335.44286ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 01:49:04.629236  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629255  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.265900327s)
	I0127 01:49:04.629259  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629271  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.629278  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.629278  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.629287  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.629375  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629383  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629375  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:04.629392  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.629399  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.629441  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629453  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629461  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.629465  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629469  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.629473  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629475  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:04.629484  905591 addons.go:479] Verifying addon registry=true in "addons-903003"
	I0127 01:49:04.629724  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:04.629754  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:04.629757  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629768  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629754  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.629776  905591 addons.go:479] Verifying addon metrics-server=true in "addons-903003"
	I0127 01:49:04.629783  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.629794  905591 addons.go:479] Verifying addon ingress=true in "addons-903003"
	I0127 01:49:04.631794  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.631813  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.631841  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:04.631854  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:04.631996  905591 out.go:177] * Verifying registry addon...
	I0127 01:49:04.632054  905591 out.go:177] * Verifying ingress addon...
	I0127 01:49:04.632998  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:04.633016  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:04.633031  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:04.634046  905591 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-903003 service yakd-dashboard -n yakd-dashboard
	
	I0127 01:49:04.634087  905591 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 01:49:04.634321  905591 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 01:49:04.652273  905591 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 01:49:04.652299  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:04.665897  905591 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 01:49:04.665918  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:04.965473  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 01:49:05.140947  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:05.141392  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:05.386177  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:05.642280  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:05.642471  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:06.176552  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:06.177022  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:06.578555  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.624675269s)
	I0127 01:49:06.578599  905591 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.533160066s)
	I0127 01:49:06.578622  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:06.578642  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:06.578945  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:06.578963  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:06.578985  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:06.578993  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:06.579236  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:06.579257  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:06.579275  905591 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-903003"
	I0127 01:49:06.579280  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:06.579968  905591 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 01:49:06.581562  905591 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 01:49:06.582652  905591 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 01:49:06.583533  905591 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 01:49:06.583859  905591 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 01:49:06.583878  905591 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 01:49:06.587228  905591 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 01:49:06.587245  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:06.654071  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:06.655208  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:06.672265  905591 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 01:49:06.672299  905591 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 01:49:06.728499  905591 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 01:49:06.728537  905591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 01:49:06.799013  905591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 01:49:06.900879  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.935354392s)
	I0127 01:49:06.900956  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:06.900975  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:06.901337  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:06.901360  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:06.901376  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:06.901383  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:06.901386  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:06.901694  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:06.901706  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:07.089682  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:07.138032  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:07.139628  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:07.591740  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:07.739423  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:07.739925  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:07.923956  905591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.124881169s)
	I0127 01:49:07.924025  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:07.924042  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:07.924378  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:07.924401  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:07.924411  905591 main.go:141] libmachine: Making call to close driver server
	I0127 01:49:07.924418  905591 main.go:141] libmachine: (addons-903003) Calling .Close
	I0127 01:49:07.924406  905591 main.go:141] libmachine: (addons-903003) DBG | Closing plugin on server side
	I0127 01:49:07.924668  905591 main.go:141] libmachine: Successfully made call to close driver server
	I0127 01:49:07.924688  905591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 01:49:07.926549  905591 addons.go:479] Verifying addon gcp-auth=true in "addons-903003"
	I0127 01:49:07.928646  905591 out.go:177] * Verifying gcp-auth addon...
	I0127 01:49:07.931161  905591 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 01:49:07.939585  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:07.962724  905591 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 01:49:07.962759  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:08.112102  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:08.139122  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:08.139903  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:08.434781  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:08.588505  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:08.639260  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:08.639615  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:08.935589  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:09.088711  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:09.139211  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:09.139733  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:09.434979  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:09.587893  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:09.638971  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:09.639064  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:09.934990  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:10.088666  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:10.138187  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:10.138714  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:10.381815  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:10.434484  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:10.588228  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:10.638567  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:10.638729  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:10.935029  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:11.088773  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:11.138595  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:11.139571  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:11.434214  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:11.588274  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:11.642806  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:11.643531  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:11.935722  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:12.088942  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:12.139374  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:12.139530  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:12.444427  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:12.589098  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:12.689000  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:12.689257  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:12.882429  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:12.935210  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:13.088340  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:13.138593  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:13.138600  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:13.435741  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:13.589061  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:13.638371  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:13.639074  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:13.935383  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:14.088080  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:14.139190  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:14.139254  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:14.434212  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:14.588811  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:14.639741  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:14.639994  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:14.934638  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:15.088798  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:15.138639  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:15.138714  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:15.382282  905591 pod_ready.go:103] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:15.435033  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:15.588915  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:15.639080  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:15.639252  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:15.936086  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:16.089412  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:16.138161  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:16.140652  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:16.381972  905591 pod_ready.go:93] pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:16.381998  905591 pod_ready.go:82] duration metric: took 15.505977788s for pod "amd-gpu-device-plugin-wqktz" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.382008  905591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d75s8" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.383568  905591 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-d75s8" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-d75s8" not found
	I0127 01:49:16.383589  905591 pod_ready.go:82] duration metric: took 1.574462ms for pod "coredns-668d6bf9bc-d75s8" in "kube-system" namespace to be "Ready" ...
	E0127 01:49:16.383597  905591 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-d75s8" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-d75s8" not found
	I0127 01:49:16.383603  905591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nwb4s" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.387516  905591 pod_ready.go:93] pod "coredns-668d6bf9bc-nwb4s" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:16.387535  905591 pod_ready.go:82] duration metric: took 3.926571ms for pod "coredns-668d6bf9bc-nwb4s" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.387543  905591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.391637  905591 pod_ready.go:93] pod "etcd-addons-903003" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:16.391655  905591 pod_ready.go:82] duration metric: took 4.106518ms for pod "etcd-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.391663  905591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.395475  905591 pod_ready.go:93] pod "kube-apiserver-addons-903003" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:16.395493  905591 pod_ready.go:82] duration metric: took 3.825339ms for pod "kube-apiserver-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.395502  905591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.433992  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:16.579556  905591 pod_ready.go:93] pod "kube-controller-manager-addons-903003" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:16.579587  905591 pod_ready.go:82] duration metric: took 184.077161ms for pod "kube-controller-manager-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.579600  905591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vb6sz" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:16.588627  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:16.638615  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:16.638963  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:16.936193  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:17.171816  905591 pod_ready.go:93] pod "kube-proxy-vb6sz" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:17.171859  905591 pod_ready.go:82] duration metric: took 592.247978ms for pod "kube-proxy-vb6sz" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:17.171876  905591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:17.173782  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:17.174573  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:17.175717  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:17.379538  905591 pod_ready.go:93] pod "kube-scheduler-addons-903003" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:17.379563  905591 pod_ready.go:82] duration metric: took 207.679939ms for pod "kube-scheduler-addons-903003" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:17.379574  905591 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:17.434460  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:17.587654  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:17.638844  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:17.638871  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:17.935559  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:18.088293  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:18.138388  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:18.139278  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:18.434998  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:18.588327  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:18.638906  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:18.639757  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:18.935113  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:19.089372  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:19.139355  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:19.139941  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:19.386370  905591 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:19.435379  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:19.589130  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:19.638600  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:19.638731  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:19.935720  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:20.087947  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:20.139896  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:20.140175  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:20.434716  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:20.587743  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:20.638204  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:20.639183  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:20.934829  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:21.089599  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:21.138872  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:21.139091  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:21.434501  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:21.589613  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:21.642304  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:21.643024  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:22.110820  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:22.112024  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:22.112486  905591 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:22.213496  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:22.213717  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:22.436031  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:22.589546  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:22.640519  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:22.640680  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:22.935148  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:23.088997  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:23.138716  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:23.140430  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:23.434708  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:23.588437  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:23.638587  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:23.639356  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:23.935736  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:24.089262  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:24.138606  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:24.138828  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:24.387502  905591 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:24.434377  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:24.588259  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:24.638212  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:24.638570  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:24.934635  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:25.088173  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:25.137921  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:25.138173  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:25.435045  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:25.588523  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:25.638454  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:25.638654  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:25.936381  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:26.090032  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:26.139972  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:26.140099  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:26.435187  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:26.588152  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:26.639967  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:26.640247  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:26.886311  905591 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:26.935127  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:27.088520  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:27.138222  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:27.138545  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:27.434475  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:27.588710  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:27.640753  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:27.641191  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:27.934627  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:28.089300  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:28.139698  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:28.140283  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:28.434876  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:28.587978  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:28.638851  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:28.639336  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:28.935051  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:29.089191  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:29.139187  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:29.139198  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:29.385528  905591 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"False"
	I0127 01:49:29.434775  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:29.588976  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:29.638622  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:29.639786  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:29.885317  905591 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace has status "Ready":"True"
	I0127 01:49:29.885344  905591 pod_ready.go:82] duration metric: took 12.505764167s for pod "nvidia-device-plugin-daemonset-lw57c" in "kube-system" namespace to be "Ready" ...
	I0127 01:49:29.885355  905591 pod_ready.go:39] duration metric: took 29.030228774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
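	(Editor note, not part of the captured log.) The pod_ready phase above cycles through the listed system-critical labels until each pod reports Ready. A hedged manual equivalent, assuming the same context and that the label values printed in the log are the ones to query:

		# sketch of reproducing the pod_ready checks by hand
		kubectl --context addons-903003 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
		kubectl --context addons-903003 -n kube-system get pods -l component=kube-apiserver
		kubectl --context addons-903003 -n kube-system get pods -l component=kube-scheduler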
	I0127 01:49:29.885387  905591 api_server.go:52] waiting for apiserver process to appear ...
	I0127 01:49:29.885457  905591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 01:49:29.905502  905591 api_server.go:72] duration metric: took 33.318209798s to wait for apiserver process to appear ...
	I0127 01:49:29.905536  905591 api_server.go:88] waiting for apiserver healthz status ...
	I0127 01:49:29.905563  905591 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 01:49:29.909992  905591 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0127 01:49:29.911031  905591 api_server.go:141] control plane version: v1.32.1
	I0127 01:49:29.911063  905591 api_server.go:131] duration metric: took 5.51296ms to wait for apiserver health ...
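	(Editor note, not part of the captured log.) The healthz probe just above hits the API server endpoint directly and expects the literal body "ok". A hand-run equivalent, assuming the same apiserver address and that unauthenticated access to /healthz is permitted on this local cluster (otherwise a client certificate or token would be needed):

		# reproduce the healthz probe; -k skips TLS verification for the local VM endpoint
		curl -k https://192.168.39.61:8443/healthz
		# expected response body on success: ok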
	I0127 01:49:29.911071  905591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 01:49:29.921828  905591 system_pods.go:59] 18 kube-system pods found
	I0127 01:49:29.921861  905591 system_pods.go:61] "amd-gpu-device-plugin-wqktz" [9013ece5-ea83-4c27-8d8e-a446d722ef47] Running
	I0127 01:49:29.921866  905591 system_pods.go:61] "coredns-668d6bf9bc-nwb4s" [a9cae9ba-d093-43cf-9a19-8f028de96946] Running
	I0127 01:49:29.921874  905591 system_pods.go:61] "csi-hostpath-attacher-0" [cdd9382d-234a-4225-a25f-fc9ba54f929d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 01:49:29.921880  905591 system_pods.go:61] "csi-hostpath-resizer-0" [5bc032a0-aba8-4036-aa26-bbf0e9238342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 01:49:29.921890  905591 system_pods.go:61] "csi-hostpathplugin-nqvx4" [597dba22-bb11-4ce8-bbce-c97797ccffdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 01:49:29.921895  905591 system_pods.go:61] "etcd-addons-903003" [23a5a8d1-c2cf-4ee0-9eeb-1ef0c58c81bc] Running
	I0127 01:49:29.921899  905591 system_pods.go:61] "kube-apiserver-addons-903003" [873a88a5-e912-40bd-af0d-8b360ff21dd1] Running
	I0127 01:49:29.921903  905591 system_pods.go:61] "kube-controller-manager-addons-903003" [040f76ad-4d96-4085-a36b-d89a1ca256fd] Running
	I0127 01:49:29.921909  905591 system_pods.go:61] "kube-ingress-dns-minikube" [9d7f42e5-7282-4522-a482-d389191a1a9b] Running
	I0127 01:49:29.921913  905591 system_pods.go:61] "kube-proxy-vb6sz" [7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3] Running
	I0127 01:49:29.921917  905591 system_pods.go:61] "kube-scheduler-addons-903003" [93dae656-b140-4f5e-a488-e49a239219f7] Running
	I0127 01:49:29.921923  905591 system_pods.go:61] "metrics-server-7fbb699795-p5dvw" [62418a49-5783-4e0b-9352-6dbf4a067aac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 01:49:29.921930  905591 system_pods.go:61] "nvidia-device-plugin-daemonset-lw57c" [a69ac6c0-f8c9-4eb0-9fb3-35c983c843b7] Running
	I0127 01:49:29.921936  905591 system_pods.go:61] "registry-6c88467877-nqsd9" [9f2c82f7-c7e4-40be-ace7-48ae48867e71] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 01:49:29.921941  905591 system_pods.go:61] "registry-proxy-wg8ff" [aa6c2e8e-0eae-47b5-b60e-a503a7c6de28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 01:49:29.921950  905591 system_pods.go:61] "snapshot-controller-68b874b76f-55zqk" [34218051-32b9-4cc4-9774-7bb55de887b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 01:49:29.921959  905591 system_pods.go:61] "snapshot-controller-68b874b76f-tlbtr" [020e6c29-561d-4141-9ef6-39c6e757894e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 01:49:29.921963  905591 system_pods.go:61] "storage-provisioner" [6e1d34de-6bc3-42ea-b921-70e7f532d594] Running
	I0127 01:49:29.921971  905591 system_pods.go:74] duration metric: took 10.895877ms to wait for pod list to return data ...
	I0127 01:49:29.921980  905591 default_sa.go:34] waiting for default service account to be created ...
	I0127 01:49:29.924211  905591 default_sa.go:45] found service account: "default"
	I0127 01:49:29.924232  905591 default_sa.go:55] duration metric: took 2.246113ms for default service account to be created ...
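	(Editor note, not part of the captured log.) The system_pods and default_sa checks above correspond to ordinary list/get calls against the cluster; a hedged manual equivalent with the same context assumed:

		# list the kube-system pods the log enumerates, and confirm the default service account exists
		kubectl --context addons-903003 -n kube-system get pods
		kubectl --context addons-903003 -n default get serviceaccount default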
	I0127 01:49:29.924239  905591 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 01:49:29.931072  905591 system_pods.go:87] 18 kube-system pods found
	I0127 01:49:29.933938  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:29.935119  905591 system_pods.go:105] "amd-gpu-device-plugin-wqktz" [9013ece5-ea83-4c27-8d8e-a446d722ef47] Running
	I0127 01:49:29.935143  905591 system_pods.go:105] "coredns-668d6bf9bc-nwb4s" [a9cae9ba-d093-43cf-9a19-8f028de96946] Running
	I0127 01:49:29.935152  905591 system_pods.go:105] "csi-hostpath-attacher-0" [cdd9382d-234a-4225-a25f-fc9ba54f929d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 01:49:29.935161  905591 system_pods.go:105] "csi-hostpath-resizer-0" [5bc032a0-aba8-4036-aa26-bbf0e9238342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 01:49:29.935168  905591 system_pods.go:105] "csi-hostpathplugin-nqvx4" [597dba22-bb11-4ce8-bbce-c97797ccffdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 01:49:29.935174  905591 system_pods.go:105] "etcd-addons-903003" [23a5a8d1-c2cf-4ee0-9eeb-1ef0c58c81bc] Running
	I0127 01:49:29.935179  905591 system_pods.go:105] "kube-apiserver-addons-903003" [873a88a5-e912-40bd-af0d-8b360ff21dd1] Running
	I0127 01:49:29.935187  905591 system_pods.go:105] "kube-controller-manager-addons-903003" [040f76ad-4d96-4085-a36b-d89a1ca256fd] Running
	I0127 01:49:29.935193  905591 system_pods.go:105] "kube-ingress-dns-minikube" [9d7f42e5-7282-4522-a482-d389191a1a9b] Running
	I0127 01:49:29.935198  905591 system_pods.go:105] "kube-proxy-vb6sz" [7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3] Running
	I0127 01:49:29.935204  905591 system_pods.go:105] "kube-scheduler-addons-903003" [93dae656-b140-4f5e-a488-e49a239219f7] Running
	I0127 01:49:29.935211  905591 system_pods.go:105] "metrics-server-7fbb699795-p5dvw" [62418a49-5783-4e0b-9352-6dbf4a067aac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 01:49:29.935218  905591 system_pods.go:105] "nvidia-device-plugin-daemonset-lw57c" [a69ac6c0-f8c9-4eb0-9fb3-35c983c843b7] Running
	I0127 01:49:29.935225  905591 system_pods.go:105] "registry-6c88467877-nqsd9" [9f2c82f7-c7e4-40be-ace7-48ae48867e71] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 01:49:29.935231  905591 system_pods.go:105] "registry-proxy-wg8ff" [aa6c2e8e-0eae-47b5-b60e-a503a7c6de28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 01:49:29.935241  905591 system_pods.go:105] "snapshot-controller-68b874b76f-55zqk" [34218051-32b9-4cc4-9774-7bb55de887b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 01:49:29.935249  905591 system_pods.go:105] "snapshot-controller-68b874b76f-tlbtr" [020e6c29-561d-4141-9ef6-39c6e757894e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 01:49:29.935257  905591 system_pods.go:105] "storage-provisioner" [6e1d34de-6bc3-42ea-b921-70e7f532d594] Running
	I0127 01:49:29.935265  905591 system_pods.go:147] duration metric: took 11.020438ms to wait for k8s-apps to be running ...
	I0127 01:49:29.935273  905591 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 01:49:29.935315  905591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 01:49:29.950503  905591 system_svc.go:56] duration metric: took 15.220356ms WaitForService to wait for kubelet
	I0127 01:49:29.950535  905591 kubeadm.go:582] duration metric: took 33.363248266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 01:49:29.950561  905591 node_conditions.go:102] verifying NodePressure condition ...
	I0127 01:49:29.953151  905591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 01:49:29.953185  905591 node_conditions.go:123] node cpu capacity is 2
	I0127 01:49:29.953202  905591 node_conditions.go:105] duration metric: took 2.636292ms to run NodePressure ...
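	(Editor note, not part of the captured log.) The kubelet and NodePressure verifications just above map to a systemctl probe inside the VM and a read of the Node object's capacity and conditions; the jsonpath fields below are assumed from the standard Node API, and this is a sketch rather than the tool's own code path:

		# kubelet service check, mirroring the systemctl probe in the log (run inside the node)
		sudo systemctl is-active --quiet kubelet && echo kubelet running
		# node capacity and condition types, mirroring the NodePressure verification
		kubectl --context addons-903003 get node addons-903003 -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[*].type}{"\n"}'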
	I0127 01:49:29.953220  905591 start.go:241] waiting for startup goroutines ...
	I0127 01:49:30.088611  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:30.138440  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:30.139371  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:30.434320  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:30.588641  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:30.638417  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:30.638906  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:30.935008  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:31.088299  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:31.138246  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:31.138409  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:31.435436  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:31.589097  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:31.638598  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:31.638825  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:31.934822  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:32.089083  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:32.139042  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:32.139914  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:32.434837  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:32.588883  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:32.638504  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:32.639377  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:32.935222  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:33.088199  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:33.138069  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:33.138600  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:33.435372  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:33.587868  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:33.638806  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:33.639183  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:33.935249  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:34.088260  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:34.138151  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:34.138625  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:34.435229  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:34.588696  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:34.638438  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:34.638684  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:34.934566  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:35.088315  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:35.139010  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:35.139043  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:35.435190  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:35.588641  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:35.638885  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:35.639453  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:35.935786  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:36.087717  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:36.138227  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:36.138292  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:36.435729  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:36.589229  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:36.637481  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:36.638170  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:36.935393  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:37.088426  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:37.138509  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:37.138681  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:37.435493  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:37.588749  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:37.638556  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:37.639066  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:37.935203  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:38.089280  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:38.138286  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:38.139041  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:38.435338  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:38.588633  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:38.638141  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:38.639097  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:38.934859  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:39.088010  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:39.137889  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:39.138739  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:39.434437  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:39.588711  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:39.638596  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:39.640072  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:39.934991  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:40.088221  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:40.138420  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:40.138432  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:40.435776  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:40.588048  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:40.638999  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:40.639343  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:40.935689  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:41.089143  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:41.138409  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:41.138649  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:41.435099  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:41.588825  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:41.639008  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:41.639242  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:41.935814  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:42.087435  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:42.138362  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:42.138917  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:42.434791  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:42.588533  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:42.637719  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:42.639227  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:42.935173  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:43.091983  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:43.141022  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:43.141214  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:43.434747  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:43.587316  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:43.638400  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:43.638776  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:43.940162  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:44.088098  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:44.138459  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:44.139305  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:44.435606  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:44.591222  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:44.639361  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:44.639493  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:44.935332  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:45.089648  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:45.139871  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:45.140259  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:45.435031  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:45.587869  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:45.639609  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:45.639639  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:45.934498  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:46.088526  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:46.138153  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:46.138494  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:46.435184  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:46.589482  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:46.638139  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:46.638962  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:46.935377  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:47.089438  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:47.138441  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:47.138614  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:47.435148  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:47.592321  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:47.640826  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:47.641009  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:47.934556  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:48.088660  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:48.138563  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:48.139677  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:48.435358  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:48.588267  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:48.639246  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:48.639327  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:48.936168  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:49.088134  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:49.138150  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:49.138380  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:49.435070  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:49.589055  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:49.638842  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:49.639212  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:49.934860  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:50.088018  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:50.139107  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:50.139518  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:50.434487  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:50.589104  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:50.638607  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:50.639038  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:50.936428  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:51.088292  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:51.137931  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:51.139479  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:51.434963  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:51.588987  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:51.638223  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:51.638841  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:51.935560  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:52.088644  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:52.138433  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:52.139050  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:52.434444  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:52.588580  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:52.639123  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:52.639473  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:52.934977  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:53.088076  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:53.138360  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:53.138495  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 01:49:53.435730  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:53.596551  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:53.642298  905591 kapi.go:107] duration metric: took 49.007973186s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 01:49:53.643687  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:53.935583  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:54.088412  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:54.138003  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:54.434728  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:54.588537  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:54.638668  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:54.935266  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:55.088814  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:55.138428  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:55.434248  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:55.588279  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:55.638086  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:55.934586  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:56.092460  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:56.137908  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:56.434284  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:56.588895  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:56.638543  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:56.934132  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:57.087416  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:57.138417  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:57.435241  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:57.588372  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:57.638413  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:57.934851  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:58.090580  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:58.615874  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:58.616179  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:58.616297  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:58.638287  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:58.936312  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:59.088677  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:59.137979  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:59.435409  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:49:59.589398  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:49:59.639031  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:49:59.934801  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:00.088731  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:00.138010  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:00.434146  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:00.588721  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:00.638868  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:00.935630  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:01.088635  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:01.142154  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:01.434991  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:01.591303  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:01.640042  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:01.935233  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:02.088558  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:02.137542  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:02.435332  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:02.588857  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:02.638490  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:03.214723  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:03.221453  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:03.222347  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:03.438835  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:03.588382  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:03.637957  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:03.934738  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:04.088115  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:04.138923  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:04.434326  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:04.588286  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:04.638797  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:04.935312  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:05.088408  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:05.138024  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:05.435671  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:05.589497  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:05.639419  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:05.934192  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:06.089162  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:06.146205  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:06.434843  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:06.587772  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:06.638570  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:06.935039  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:07.087402  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:07.138480  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:07.434873  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:07.587658  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:07.639189  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:07.934626  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:08.550153  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:08.550368  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:08.551099  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:08.650843  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:08.651429  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:08.935529  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:09.089235  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:09.138605  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:09.435795  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:09.588496  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:09.638168  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:09.934705  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:10.088053  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:10.138513  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:10.434493  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:10.588535  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:10.638349  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:10.935437  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:11.088335  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:11.138297  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:11.435218  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:11.589161  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:11.638955  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:12.193282  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:12.194329  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:12.195144  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:12.434416  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:12.589445  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:12.720119  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:12.935140  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:13.088919  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:13.140061  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:13.435913  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:13.591046  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:13.645554  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:13.935157  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:14.087845  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:14.139333  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:14.435070  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:14.588070  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:14.638029  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:14.934672  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:15.088051  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:15.138154  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:15.445535  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:15.611286  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:15.710757  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:15.935859  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:16.088868  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:16.138882  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:16.435128  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:16.587985  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:16.638264  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:16.934304  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:17.095052  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:17.138955  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:17.434368  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:17.588281  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:17.637632  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:17.935210  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:18.090016  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:18.138947  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:18.436332  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:18.595491  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:18.696345  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:18.935879  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:19.088294  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:19.138149  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:19.434409  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:19.588126  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:19.638794  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:20.295912  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:20.296316  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:20.296430  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:20.435476  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:20.589030  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:20.690889  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:20.935039  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:21.088457  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:21.139580  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:21.436332  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:21.588266  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:21.638059  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:21.935968  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:22.087458  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:22.138393  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:22.435388  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:22.587949  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:22.639140  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:22.934716  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:23.089590  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:23.138448  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:23.436309  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:23.588956  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:23.638968  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:23.935488  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:24.089525  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:24.138612  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:24.436082  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:24.588690  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:24.638386  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:24.935127  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:25.088141  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:25.137712  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:25.435273  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:25.588493  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:25.637997  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:25.934383  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:26.088472  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:26.138630  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:26.435383  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:26.590428  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:26.639967  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:26.935040  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:27.088898  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:27.138622  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:27.434585  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:27.588411  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:27.639034  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:27.935087  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:28.088422  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 01:50:28.138753  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:28.435531  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:28.588513  905591 kapi.go:107] duration metric: took 1m22.004974609s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 01:50:28.638404  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:28.935686  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:29.138479  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:29.435274  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:29.638782  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:29.934438  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:30.138882  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:30.434639  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:30.638340  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:30.935519  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:31.138542  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:31.435137  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:31.638781  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:31.934666  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:32.138732  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:32.434737  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:32.638564  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:32.935356  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:33.138427  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:33.434911  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:33.638645  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:33.935458  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:34.138301  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:34.435071  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:34.639772  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:34.935440  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:35.138519  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:35.435156  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:35.639036  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:35.937248  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:36.139000  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:36.434326  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:36.638116  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:36.935433  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:37.138038  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:37.434331  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:37.637691  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:37.934121  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:38.139049  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:38.434940  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:38.639035  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:38.934467  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:39.137953  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:39.434719  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:39.639728  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:39.934629  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:40.138475  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:40.434294  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:40.640880  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:40.935247  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:41.138608  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:41.435067  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:41.644690  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:41.934201  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:42.139707  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:42.434174  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:42.638924  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:42.935228  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:43.138833  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:43.434364  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:43.638565  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:43.935095  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:44.138708  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:44.435493  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:44.638397  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:44.935481  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:45.138680  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:45.434397  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:45.640483  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:45.935221  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:46.139176  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:46.434673  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:46.637958  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:46.934859  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:47.138477  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:47.434982  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:47.638988  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:47.934537  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:48.138426  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:48.434887  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:48.638687  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:48.935024  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:49.138602  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:49.438935  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:49.638699  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:49.934011  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:50.138583  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:50.434464  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:50.638584  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:50.935552  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:51.138790  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:51.434307  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:51.640644  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:51.935133  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:52.140214  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:52.434879  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:52.638768  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:52.935667  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:53.138714  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:53.434342  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:53.639995  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:53.934573  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:54.138169  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:54.435227  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:54.639280  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:54.935968  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:55.139021  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:55.436132  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:55.643417  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:55.935788  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:56.139093  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:56.435164  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:56.638833  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:56.937982  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:57.138677  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:57.435633  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:57.639361  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:57.936138  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:58.138808  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:58.434765  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:58.638455  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:58.935665  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:59.138983  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:59.434740  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:50:59.640562  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:50:59.936146  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:00.138171  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:00.434271  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:00.639661  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:00.934873  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:01.139286  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:01.434671  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:01.638836  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:01.935047  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:02.139175  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:02.434730  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:02.639528  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:02.935270  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:03.139515  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:03.435781  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:03.641051  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:03.935454  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:04.138396  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:04.435829  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:04.639088  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:04.935446  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:05.138319  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:05.434559  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:05.642878  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:05.934518  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:06.138328  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:06.435261  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:06.638947  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:06.935141  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:07.139038  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:07.434858  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:07.638922  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:07.934955  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:08.139558  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:08.434952  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:08.638818  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:08.934703  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:09.139008  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:09.435564  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:09.644663  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:09.935547  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:10.139348  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:10.434612  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:10.639027  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:10.934863  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:11.138823  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:11.434604  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:11.638590  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:11.934970  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:12.139222  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:12.435203  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:12.639637  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:12.934513  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:13.138370  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:13.435742  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:13.641896  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:13.936969  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:14.140317  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:14.435593  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:14.638332  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:14.935183  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:15.138274  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:15.434820  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:15.638680  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:15.934162  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:16.139337  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:16.435414  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:16.638354  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:16.935413  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:17.137787  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:17.434443  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:17.643236  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:17.935209  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:18.138981  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:18.434853  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:18.639721  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:18.934850  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:19.139146  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:19.436013  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:19.641468  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:19.935723  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:20.138680  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:20.435361  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:20.637886  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:20.934662  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:21.138405  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:21.435227  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:21.639438  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:21.934541  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:22.138579  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:22.435439  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:22.637640  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:22.935232  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:23.139502  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:23.434985  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:23.644396  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:23.935687  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:24.140293  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:24.436517  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:24.638392  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:24.934765  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:25.138869  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:25.435058  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:25.639125  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:25.934321  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:26.139482  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:26.434921  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:26.640288  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:26.936163  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:27.139168  905591 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 01:51:27.435299  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:27.639979  905591 kapi.go:107] duration metric: took 2m23.005886962s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 01:51:27.934653  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:28.435656  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:28.938740  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:29.435003  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:29.934448  905591 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 01:51:30.439412  905591 kapi.go:107] duration metric: took 2m22.508245104s to wait for kubernetes.io/minikube-addons=gcp-auth ...
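The two polling loops above simply wait for pods matching the label selectors app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=gcp-auth to become ready. A rough manual equivalent is sketched below with kubectl wait; the namespaces (ingress-nginx, gcp-auth) and the 5-minute timeout are assumptions about where these addons normally deploy, not values read from this log.

# Sketch only: reproduce the readiness polling by hand with kubectl wait.
# Namespaces and the timeout are assumptions, not taken from this report.
# Completed admission Jobs that share the ingress-nginx label may need excluding.
kubectl --context addons-903003 -n ingress-nginx wait pod \
  --selector=app.kubernetes.io/name=ingress-nginx \
  --for=condition=Ready --timeout=5m
kubectl --context addons-903003 -n gcp-auth wait pod \
  --selector=kubernetes.io/minikube-addons=gcp-auth \
  --for=condition=Ready --timeout=5m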
	I0127 01:51:30.440883  905591 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-903003 cluster.
	I0127 01:51:30.442167  905591 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 01:51:30.443622  905591 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 01:51:30.444990  905591 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 01:51:30.446212  905591 addons.go:514] duration metric: took 2m33.8589228s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 01:51:30.446277  905591 start.go:246] waiting for cluster config update ...
	I0127 01:51:30.446309  905591 start.go:255] writing updated cluster config ...
	I0127 01:51:30.446691  905591 ssh_runner.go:195] Run: rm -f paused
	I0127 01:51:30.503403  905591 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 01:51:30.505100  905591 out.go:177] * Done! kubectl is now configured to use "addons-903003" cluster and "default" namespace by default
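On the gcp-auth notes a few lines up: a pod opts out of the credential mount by carrying a label with the gcp-auth-skip-secret key in its spec at creation time. A minimal sketch follows, assuming the addon's admission webhook honours that label; the pod name is hypothetical and the busybox image is reused from this cluster purely for illustration.

# Sketch: create a pod that opts out of the gcp-auth credential mount.
# "no-gcp-creds" is a hypothetical name; the label key comes from the message above.
kubectl --context addons-903003 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF

Pods created before the addon was enabled keep their original spec; per the message above they would have to be recreated, or the addon re-enabled with --refresh, for the mount behaviour to change.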
	
	
	==> CRI-O <==
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.871792449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942885871768352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1946c786-bbee-4aac-b852-94f5a00d4680 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.872339501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f182876-a5af-4a3f-918e-81a75d435fc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.872406498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f182876-a5af-4a3f-918e-81a75d435fc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.872722004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e5700f43a7344cec1193500a12af914ca8abbae3c9d94804402d76597f7d47d,PodSandboxId:a6c73b95b65040aa9a9367b49dc24ec71342b24705665408dba5d7cc4966afbb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737942747712062811,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67533d9e-df51-412f-a10f-c3983796b129,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a88a7782117c71bb038043f49bec34a3835832128067bbd050b03f77c63b72,PodSandboxId:7d08bd19208561b817accbf916e7797e76df8f6a4d74879bb172e2fc63452640,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737942696973346421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1212f3bd-35e3-41c2-9a82-bcfd56ffc644,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c45d7d3d054bf5e48326b5fcaa8a622e5632dd878d38378a037ac87abadb4a,PodSandboxId:0485f7aee138b83e2e61cfe9b470bc30db86eea0ea06c71afa893f6d7b0b89ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737942686497620602,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xznn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f7a4b017-9d0e-4cd1-b526-f7be059a71a6,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57ceb56c95c04e53e43c18b1333bfc4ff54e3c07bd559560e9d0af796f90d4a1,PodSandboxId:490c3c0803bc5c1971e76b2214b63f202ebeaf6d7b10b55965040e38f16919e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615565813460,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dpms5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 687ff59f-9141-426d-ac55-9e503670a652,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be61b24573a90ce0bf5269ffd6cfd6c92f034003a119c0dec93d3de33bd69525,PodSandboxId:f6ca4144c89446e49fa6091a785b7a0af6c5c79db2d981608f4ed0ae74d9e139,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615435884368,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hjnbs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1606cf8-eec5-4a7a-b37c-88197bbd6371,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf157605b382d43bf327733644428c88446173935028bad34f970cf9d0bb2ca,PodSandboxId:8974f3a420738331f12d6d8c6e18d16a2ea0c0119bc9d188e5f17ec233fe55de,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737942555238762422,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqktz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9013ece5-ea83-4c27-8d8e-a446d722ef47,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19744d14520dce16cf77234d30279887f28cbe2fc9e9917321f4a270fb7b63be,PodSandboxId:f9c2627436195c27a743527925ef4b95a8b10c5525404d1a80e898882e957a18,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737942552571958488,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7f42e5-7282-4522-a482-d389191a1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a546b9ecb226863dae429d6e56f6a3121105316a155f1d38f896f25755c0f8,PodSandboxId:9b834ed17dd6c2842440d7d7d952710b2fbc5ce3ca7dd4710b96c55ad3d81281,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737942542386827174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1d34de-6bc3-42ea-b921-70e7f532d594,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d11ecfb2fdac68d4a57cf7961417f17bc4c891986a83d3f913d7b3fa84f4fc,PodSandboxId:2db8283e79ee364bf9626c8f6d964d54f3b3a63f286beb1a07e212fbb1a715c9,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737942540564526191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nwb4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9cae9ba-d093-43cf-9a19-8f028de96946,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c2116d528ea98dad2c7c0944717a49cfcf0cc34384ee311de1399a92a004aab0,PodSandboxId:dfb059772a3468466a76b43bf5d3eaf4bd5b7e7f56a62259c18cd2d132ab98ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737942537474961477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vb6sz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de737e59fe297b33460fc65a4cd9
a166510b84f45473e8462e20570acfde5d49,PodSandboxId:c2f927956b9a4130246f6f5403f018b3d7c69d370c6e9d9b0ff3bceb91bbc0d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737942526809858952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5819a4e0c9422c66a007673dd763eed7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6c177969bb04a4f8c6c984ec988869973e88f4eccc3
0fa88772b397cd0b272,PodSandboxId:5ca4cfbc100e54cf81b07e95b17223b03a9bf72c33d05f9b5095506912d88478,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737942526765795891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b17897e26cd7516652e8f65ed8105e4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b075bdc01dece618b853b45378831952cd05ecb82ddeaf1db3c2ca3386134b2,PodSandboxId:83b53fba0e5815
529084e4c404f3fcc9ff8a9d01167a69a6b0a47dacd88c35af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737942526748236972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6b9fbac6ddc16ca9066be46f7c53c6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d19f73e994612e11b8d2216395d21ff36d696e430d03d3c56d2a67274733289,PodSandboxId:ddafbc366230bbb616ca40259627e1e
7ef2726a08d697c72c16701759b02f085,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737942526699109749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968b3db183219ef8850c535b2c371256,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f182876-a5af-4a3f-918e-81a75d435fc0 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.913594483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb7f0e7a-a13b-4eb1-aecd-8056157bc4f0 name=/runtime.v1.RuntimeService/Version
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.913674138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb7f0e7a-a13b-4eb1-aecd-8056157bc4f0 name=/runtime.v1.RuntimeService/Version
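The CRI-O entries in this section are the server side of routine CRI gRPC calls (Version, ImageFsInfo, ListContainers), most likely the kubelet's periodic polling. The same endpoints can be exercised by hand from inside the node with crictl; the commands below are a sketch, and the explicit socket path is CRI-O's usual default rather than a value read from this log.

# Sketch: issue the same CRI calls seen in the debug entries above.
# The runtime endpoint is the default CRI-O socket path (an assumption here).
out/minikube-linux-amd64 -p addons-903003 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
out/minikube-linux-amd64 -p addons-903003 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
out/minikube-linux-amd64 -p addons-903003 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"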
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.915119440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24b91a03-f28c-4586-b332-2f19469e9877 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.916247451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942885916221108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24b91a03-f28c-4586-b332-2f19469e9877 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.916772757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3daa8a1e-3140-4013-9ee9-00ed51ef893e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.916840562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3daa8a1e-3140-4013-9ee9-00ed51ef893e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.922680236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e5700f43a7344cec1193500a12af914ca8abbae3c9d94804402d76597f7d47d,PodSandboxId:a6c73b95b65040aa9a9367b49dc24ec71342b24705665408dba5d7cc4966afbb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737942747712062811,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67533d9e-df51-412f-a10f-c3983796b129,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a88a7782117c71bb038043f49bec34a3835832128067bbd050b03f77c63b72,PodSandboxId:7d08bd19208561b817accbf916e7797e76df8f6a4d74879bb172e2fc63452640,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737942696973346421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1212f3bd-35e3-41c2-9a82-bcfd56ffc644,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c45d7d3d054bf5e48326b5fcaa8a622e5632dd878d38378a037ac87abadb4a,PodSandboxId:0485f7aee138b83e2e61cfe9b470bc30db86eea0ea06c71afa893f6d7b0b89ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737942686497620602,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xznn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f7a4b017-9d0e-4cd1-b526-f7be059a71a6,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57ceb56c95c04e53e43c18b1333bfc4ff54e3c07bd559560e9d0af796f90d4a1,PodSandboxId:490c3c0803bc5c1971e76b2214b63f202ebeaf6d7b10b55965040e38f16919e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615565813460,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dpms5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 687ff59f-9141-426d-ac55-9e503670a652,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be61b24573a90ce0bf5269ffd6cfd6c92f034003a119c0dec93d3de33bd69525,PodSandboxId:f6ca4144c89446e49fa6091a785b7a0af6c5c79db2d981608f4ed0ae74d9e139,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615435884368,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hjnbs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1606cf8-eec5-4a7a-b37c-88197bbd6371,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf157605b382d43bf327733644428c88446173935028bad34f970cf9d0bb2ca,PodSandboxId:8974f3a420738331f12d6d8c6e18d16a2ea0c0119bc9d188e5f17ec233fe55de,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737942555238762422,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqktz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9013ece5-ea83-4c27-8d8e-a446d722ef47,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19744d14520dce16cf77234d30279887f28cbe2fc9e9917321f4a270fb7b63be,PodSandboxId:f9c2627436195c27a743527925ef4b95a8b10c5525404d1a80e898882e957a18,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737942552571958488,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7f42e5-7282-4522-a482-d389191a1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a546b9ecb226863dae429d6e56f6a3121105316a155f1d38f896f25755c0f8,PodSandboxId:9b834ed17dd6c2842440d7d7d952710b2fbc5ce3ca7dd4710b96c55ad3d81281,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737942542386827174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1d34de-6bc3-42ea-b921-70e7f532d594,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d11ecfb2fdac68d4a57cf7961417f17bc4c891986a83d3f913d7b3fa84f4fc,PodSandboxId:2db8283e79ee364bf9626c8f6d964d54f3b3a63f286beb1a07e212fbb1a715c9,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737942540564526191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nwb4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9cae9ba-d093-43cf-9a19-8f028de96946,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c2116d528ea98dad2c7c0944717a49cfcf0cc34384ee311de1399a92a004aab0,PodSandboxId:dfb059772a3468466a76b43bf5d3eaf4bd5b7e7f56a62259c18cd2d132ab98ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737942537474961477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vb6sz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de737e59fe297b33460fc65a4cd9
a166510b84f45473e8462e20570acfde5d49,PodSandboxId:c2f927956b9a4130246f6f5403f018b3d7c69d370c6e9d9b0ff3bceb91bbc0d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737942526809858952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5819a4e0c9422c66a007673dd763eed7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6c177969bb04a4f8c6c984ec988869973e88f4eccc3
0fa88772b397cd0b272,PodSandboxId:5ca4cfbc100e54cf81b07e95b17223b03a9bf72c33d05f9b5095506912d88478,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737942526765795891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b17897e26cd7516652e8f65ed8105e4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b075bdc01dece618b853b45378831952cd05ecb82ddeaf1db3c2ca3386134b2,PodSandboxId:83b53fba0e5815
529084e4c404f3fcc9ff8a9d01167a69a6b0a47dacd88c35af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737942526748236972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6b9fbac6ddc16ca9066be46f7c53c6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d19f73e994612e11b8d2216395d21ff36d696e430d03d3c56d2a67274733289,PodSandboxId:ddafbc366230bbb616ca40259627e1e
7ef2726a08d697c72c16701759b02f085,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737942526699109749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968b3db183219ef8850c535b2c371256,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3daa8a1e-3140-4013-9ee9-00ed51ef893e name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.954554180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1e3d931-e1ad-41c7-afd9-111ef9d6835f name=/runtime.v1.RuntimeService/Version
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.954642980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1e3d931-e1ad-41c7-afd9-111ef9d6835f name=/runtime.v1.RuntimeService/Version
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.955596814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=695e6cd4-1c1d-4eab-9b80-794eef6c43a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.956913861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942885956887435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=695e6cd4-1c1d-4eab-9b80-794eef6c43a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.957465521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f557b4c-6e16-48b9-9f2e-836936b410d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.957527118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f557b4c-6e16-48b9-9f2e-836936b410d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.957832338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e5700f43a7344cec1193500a12af914ca8abbae3c9d94804402d76597f7d47d,PodSandboxId:a6c73b95b65040aa9a9367b49dc24ec71342b24705665408dba5d7cc4966afbb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737942747712062811,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67533d9e-df51-412f-a10f-c3983796b129,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a88a7782117c71bb038043f49bec34a3835832128067bbd050b03f77c63b72,PodSandboxId:7d08bd19208561b817accbf916e7797e76df8f6a4d74879bb172e2fc63452640,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737942696973346421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1212f3bd-35e3-41c2-9a82-bcfd56ffc644,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c45d7d3d054bf5e48326b5fcaa8a622e5632dd878d38378a037ac87abadb4a,PodSandboxId:0485f7aee138b83e2e61cfe9b470bc30db86eea0ea06c71afa893f6d7b0b89ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737942686497620602,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xznn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f7a4b017-9d0e-4cd1-b526-f7be059a71a6,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57ceb56c95c04e53e43c18b1333bfc4ff54e3c07bd559560e9d0af796f90d4a1,PodSandboxId:490c3c0803bc5c1971e76b2214b63f202ebeaf6d7b10b55965040e38f16919e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615565813460,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dpms5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 687ff59f-9141-426d-ac55-9e503670a652,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be61b24573a90ce0bf5269ffd6cfd6c92f034003a119c0dec93d3de33bd69525,PodSandboxId:f6ca4144c89446e49fa6091a785b7a0af6c5c79db2d981608f4ed0ae74d9e139,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615435884368,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hjnbs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1606cf8-eec5-4a7a-b37c-88197bbd6371,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf157605b382d43bf327733644428c88446173935028bad34f970cf9d0bb2ca,PodSandboxId:8974f3a420738331f12d6d8c6e18d16a2ea0c0119bc9d188e5f17ec233fe55de,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737942555238762422,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqktz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9013ece5-ea83-4c27-8d8e-a446d722ef47,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19744d14520dce16cf77234d30279887f28cbe2fc9e9917321f4a270fb7b63be,PodSandboxId:f9c2627436195c27a743527925ef4b95a8b10c5525404d1a80e898882e957a18,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737942552571958488,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7f42e5-7282-4522-a482-d389191a1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a546b9ecb226863dae429d6e56f6a3121105316a155f1d38f896f25755c0f8,PodSandboxId:9b834ed17dd6c2842440d7d7d952710b2fbc5ce3ca7dd4710b96c55ad3d81281,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737942542386827174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1d34de-6bc3-42ea-b921-70e7f532d594,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d11ecfb2fdac68d4a57cf7961417f17bc4c891986a83d3f913d7b3fa84f4fc,PodSandboxId:2db8283e79ee364bf9626c8f6d964d54f3b3a63f286beb1a07e212fbb1a715c9,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737942540564526191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nwb4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9cae9ba-d093-43cf-9a19-8f028de96946,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c2116d528ea98dad2c7c0944717a49cfcf0cc34384ee311de1399a92a004aab0,PodSandboxId:dfb059772a3468466a76b43bf5d3eaf4bd5b7e7f56a62259c18cd2d132ab98ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737942537474961477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vb6sz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de737e59fe297b33460fc65a4cd9
a166510b84f45473e8462e20570acfde5d49,PodSandboxId:c2f927956b9a4130246f6f5403f018b3d7c69d370c6e9d9b0ff3bceb91bbc0d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737942526809858952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5819a4e0c9422c66a007673dd763eed7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6c177969bb04a4f8c6c984ec988869973e88f4eccc3
0fa88772b397cd0b272,PodSandboxId:5ca4cfbc100e54cf81b07e95b17223b03a9bf72c33d05f9b5095506912d88478,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737942526765795891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b17897e26cd7516652e8f65ed8105e4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b075bdc01dece618b853b45378831952cd05ecb82ddeaf1db3c2ca3386134b2,PodSandboxId:83b53fba0e5815
529084e4c404f3fcc9ff8a9d01167a69a6b0a47dacd88c35af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737942526748236972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6b9fbac6ddc16ca9066be46f7c53c6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d19f73e994612e11b8d2216395d21ff36d696e430d03d3c56d2a67274733289,PodSandboxId:ddafbc366230bbb616ca40259627e1e
7ef2726a08d697c72c16701759b02f085,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737942526699109749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968b3db183219ef8850c535b2c371256,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f557b4c-6e16-48b9-9f2e-836936b410d6 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.991160419Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8bf38ed-8556-46f9-a0bd-9f4ef36fd1f1 name=/runtime.v1.RuntimeService/Version
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.991251216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8bf38ed-8556-46f9-a0bd-9f4ef36fd1f1 name=/runtime.v1.RuntimeService/Version
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.992498734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=062b48c5-d7b3-4d4f-9263-e7a53249e806 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.994154310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942885994117619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=062b48c5-d7b3-4d4f-9263-e7a53249e806 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.994806474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf1756cb-8e47-493c-8f38-f1124b9e42dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.994889680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf1756cb-8e47-493c-8f38-f1124b9e42dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 01:54:45 addons-903003 crio[660]: time="2025-01-27 01:54:45.995218372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e5700f43a7344cec1193500a12af914ca8abbae3c9d94804402d76597f7d47d,PodSandboxId:a6c73b95b65040aa9a9367b49dc24ec71342b24705665408dba5d7cc4966afbb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737942747712062811,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67533d9e-df51-412f-a10f-c3983796b129,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a88a7782117c71bb038043f49bec34a3835832128067bbd050b03f77c63b72,PodSandboxId:7d08bd19208561b817accbf916e7797e76df8f6a4d74879bb172e2fc63452640,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737942696973346421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1212f3bd-35e3-41c2-9a82-bcfd56ffc644,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c45d7d3d054bf5e48326b5fcaa8a622e5632dd878d38378a037ac87abadb4a,PodSandboxId:0485f7aee138b83e2e61cfe9b470bc30db86eea0ea06c71afa893f6d7b0b89ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737942686497620602,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xznn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f7a4b017-9d0e-4cd1-b526-f7be059a71a6,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57ceb56c95c04e53e43c18b1333bfc4ff54e3c07bd559560e9d0af796f90d4a1,PodSandboxId:490c3c0803bc5c1971e76b2214b63f202ebeaf6d7b10b55965040e38f16919e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615565813460,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dpms5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 687ff59f-9141-426d-ac55-9e503670a652,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be61b24573a90ce0bf5269ffd6cfd6c92f034003a119c0dec93d3de33bd69525,PodSandboxId:f6ca4144c89446e49fa6091a785b7a0af6c5c79db2d981608f4ed0ae74d9e139,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737942615435884368,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hjnbs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1606cf8-eec5-4a7a-b37c-88197bbd6371,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf157605b382d43bf327733644428c88446173935028bad34f970cf9d0bb2ca,PodSandboxId:8974f3a420738331f12d6d8c6e18d16a2ea0c0119bc9d188e5f17ec233fe55de,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737942555238762422,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqktz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9013ece5-ea83-4c27-8d8e-a446d722ef47,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19744d14520dce16cf77234d30279887f28cbe2fc9e9917321f4a270fb7b63be,PodSandboxId:f9c2627436195c27a743527925ef4b95a8b10c5525404d1a80e898882e957a18,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737942552571958488,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7f42e5-7282-4522-a482-d389191a1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a546b9ecb226863dae429d6e56f6a3121105316a155f1d38f896f25755c0f8,PodSandboxId:9b834ed17dd6c2842440d7d7d952710b2fbc5ce3ca7dd4710b96c55ad3d81281,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737942542386827174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1d34de-6bc3-42ea-b921-70e7f532d594,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d11ecfb2fdac68d4a57cf7961417f17bc4c891986a83d3f913d7b3fa84f4fc,PodSandboxId:2db8283e79ee364bf9626c8f6d964d54f3b3a63f286beb1a07e212fbb1a715c9,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737942540564526191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nwb4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9cae9ba-d093-43cf-9a19-8f028de96946,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c2116d528ea98dad2c7c0944717a49cfcf0cc34384ee311de1399a92a004aab0,PodSandboxId:dfb059772a3468466a76b43bf5d3eaf4bd5b7e7f56a62259c18cd2d132ab98ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737942537474961477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vb6sz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b775e84-eda4-43e3-8e2a-2bfdcd9da2d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de737e59fe297b33460fc65a4cd9
a166510b84f45473e8462e20570acfde5d49,PodSandboxId:c2f927956b9a4130246f6f5403f018b3d7c69d370c6e9d9b0ff3bceb91bbc0d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737942526809858952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5819a4e0c9422c66a007673dd763eed7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6c177969bb04a4f8c6c984ec988869973e88f4eccc3
0fa88772b397cd0b272,PodSandboxId:5ca4cfbc100e54cf81b07e95b17223b03a9bf72c33d05f9b5095506912d88478,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737942526765795891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b17897e26cd7516652e8f65ed8105e4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b075bdc01dece618b853b45378831952cd05ecb82ddeaf1db3c2ca3386134b2,PodSandboxId:83b53fba0e5815
529084e4c404f3fcc9ff8a9d01167a69a6b0a47dacd88c35af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737942526748236972,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d6b9fbac6ddc16ca9066be46f7c53c6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d19f73e994612e11b8d2216395d21ff36d696e430d03d3c56d2a67274733289,PodSandboxId:ddafbc366230bbb616ca40259627e1e
7ef2726a08d697c72c16701759b02f085,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737942526699109749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-903003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968b3db183219ef8850c535b2c371256,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf1756cb-8e47-493c-8f38-f1124b9e42dc name=/runtime.v1.RuntimeServ
ice/ListContainers
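
The repeated Version / ImageFsInfo / ListContainers requests above are the kubelet's periodic CRI polling of CRI-O over its gRPC socket. As a hedged sketch (the socket path is taken from the node's cri-socket annotation further down; adjust it for a different runtime), the same three calls can be reproduced by hand on the node with crictl:

    # assumed CRI-O default socket, matching the node's cri-socket annotation
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers (no filter)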
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e5700f43a734       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   a6c73b95b6504       nginx
	56a88a7782117       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   7d08bd1920856       busybox
	c2c45d7d3d054       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   0485f7aee138b       ingress-nginx-controller-56d7c84fd4-xznn8
	57ceb56c95c04       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   490c3c0803bc5       ingress-nginx-admission-patch-dpms5
	be61b24573a90       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   f6ca4144c8944       ingress-nginx-admission-create-hjnbs
	2bf157605b382       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   8974f3a420738       amd-gpu-device-plugin-wqktz
	19744d14520dc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   f9c2627436195       kube-ingress-dns-minikube
	e3a546b9ecb22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   9b834ed17dd6c       storage-provisioner
	b2d11ecfb2fda       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   2db8283e79ee3       coredns-668d6bf9bc-nwb4s
	c2116d528ea98       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             5 minutes ago       Running             kube-proxy                0                   dfb059772a346       kube-proxy-vb6sz
	de737e59fe297       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             5 minutes ago       Running             kube-scheduler            0                   c2f927956b9a4       kube-scheduler-addons-903003
	ac6c177969bb0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   5ca4cfbc100e5       etcd-addons-903003
	5b075bdc01dec       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             5 minutes ago       Running             kube-apiserver            0                   83b53fba0e581       kube-apiserver-addons-903003
	2d19f73e99461       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             5 minutes ago       Running             kube-controller-manager   0                   ddafbc366230b       kube-controller-manager-addons-903003
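
Every control-plane and addon container in the table reports Running, and the two admission-webhook jobs have exited as expected. To dig into a single entry, one option (a sketch, assuming crictl is available on the node and accepts the truncated IDs shown above) is:

    # inspect and tail the nginx test container by its truncated ID from the table
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 9e5700f43a734
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 9e5700f43a734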
	
	
	==> coredns [b2d11ecfb2fdac68d4a57cf7961417f17bc4c891986a83d3f913d7b3fa84f4fc] <==
	[INFO] 10.244.0.8:44238 - 5781 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000113827s
	[INFO] 10.244.0.8:44238 - 61775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000116134s
	[INFO] 10.244.0.8:44238 - 45925 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000072935s
	[INFO] 10.244.0.8:44238 - 4651 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000089126s
	[INFO] 10.244.0.8:44238 - 48380 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000058796s
	[INFO] 10.244.0.8:44238 - 37963 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097225s
	[INFO] 10.244.0.8:44238 - 30784 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000083304s
	[INFO] 10.244.0.8:38625 - 13364 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102996s
	[INFO] 10.244.0.8:38625 - 13675 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120499s
	[INFO] 10.244.0.8:45309 - 22195 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125118s
	[INFO] 10.244.0.8:45309 - 22471 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000129031s
	[INFO] 10.244.0.8:38494 - 46356 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064954s
	[INFO] 10.244.0.8:38494 - 46846 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085789s
	[INFO] 10.244.0.8:42926 - 38683 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066676s
	[INFO] 10.244.0.8:42926 - 39133 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116318s
	[INFO] 10.244.0.23:36099 - 14528 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523044s
	[INFO] 10.244.0.23:39739 - 6384 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000211727s
	[INFO] 10.244.0.23:42172 - 30087 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000195334s
	[INFO] 10.244.0.23:49260 - 50962 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175376s
	[INFO] 10.244.0.23:42260 - 7810 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000330019s
	[INFO] 10.244.0.23:41105 - 15194 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000186557s
	[INFO] 10.244.0.23:46595 - 46368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001218261s
	[INFO] 10.244.0.23:41761 - 64677 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001747532s
	[INFO] 10.244.0.26:33128 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000300047s
	[INFO] 10.244.0.26:33696 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173251s
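
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: the pod resolver (ndots:5) tries each suffix from the cluster search list before the fully qualified name answers. A hedged way to reproduce this from inside the cluster, using the busybox test pod that already exists in the default namespace:

    # show the pod's search list, then resolve the in-cluster registry service
    kubectl --context addons-903003 exec busybox -- cat /etc/resolv.conf
    kubectl --context addons-903003 exec busybox -- nslookup registry.kube-system.svc.cluster.local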
	
	
	==> describe nodes <==
	Name:               addons-903003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-903003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=addons-903003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T01_48_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-903003
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 01:48:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-903003
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 01:54:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 01:52:58 +0000   Mon, 27 Jan 2025 01:48:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 01:52:58 +0000   Mon, 27 Jan 2025 01:48:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 01:52:58 +0000   Mon, 27 Jan 2025 01:48:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 01:52:58 +0000   Mon, 27 Jan 2025 01:48:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    addons-903003
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbaa0bde20fa422e880b8f2001de46a6
	  System UUID:                cbaa0bde-20fa-422e-880b-8f2001de46a6
	  Boot ID:                    08cb80fb-755d-450d-aeef-b01a55e6f702
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     hello-world-app-7d9564db4-lckw7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-xznn8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m42s
	  kube-system                 amd-gpu-device-plugin-wqktz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 coredns-668d6bf9bc-nwb4s                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m50s
	  kube-system                 etcd-addons-903003                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m55s
	  kube-system                 kube-apiserver-addons-903003                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-addons-903003        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-vb6sz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-addons-903003                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m47s  kube-proxy       
	  Normal  Starting                 5m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m55s  kubelet          Node addons-903003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s  kubelet          Node addons-903003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s  kubelet          Node addons-903003 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m54s  kubelet          Node addons-903003 status is now: NodeReady
	  Normal  RegisteredNode           5m51s  node-controller  Node addons-903003 event: Registered Node addons-903003 in Controller
	  Normal  CIDRAssignmentFailed     5m51s  cidrAllocator    Node addons-903003 status is now: CIDRAssignmentFailed
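
This section is the standard node description for the single minikube node. The same data can be pulled ad hoc while debugging (a sketch using stock kubectl field paths):

    kubectl --context addons-903003 describe node addons-903003
    kubectl --context addons-903003 get node addons-903003 -o jsonpath='{.status.allocatable}{"\n"}'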
	
	
	==> dmesg <==
	[  +5.478513] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.076091] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.357223] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.185781] kauditd_printk_skb: 21 callbacks suppressed
	[Jan27 01:49] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.015779] kauditd_printk_skb: 141 callbacks suppressed
	[  +5.598627] kauditd_printk_skb: 68 callbacks suppressed
	[ +40.736640] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 01:50] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.800379] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.066083] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.772881] kauditd_printk_skb: 35 callbacks suppressed
	[Jan27 01:51] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.422314] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.519760] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.808776] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.134975] kauditd_printk_skb: 6 callbacks suppressed
	[Jan27 01:52] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.125483] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.277583] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.003074] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.238651] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.127861] kauditd_printk_skb: 24 callbacks suppressed
	[ +17.517176] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.108233] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [ac6c177969bb04a4f8c6c984ec988869973e88f4eccc30fa88772b397cd0b272] <==
	{"level":"info","ts":"2025-01-27T01:50:20.278671Z","caller":"traceutil/trace.go:171","msg":"trace[1749377833] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1033; }","duration":"155.115776ms","start":"2025-01-27T01:50:20.123546Z","end":"2025-01-27T01:50:20.278662Z","steps":["trace[1749377833] 'agreement among raft nodes before linearized reading'  (duration: 154.708142ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:51:59.128379Z","caller":"traceutil/trace.go:171","msg":"trace[2061404231] linearizableReadLoop","detail":"{readStateIndex:1423; appliedIndex:1422; }","duration":"152.833919ms","start":"2025-01-27T01:51:58.975509Z","end":"2025-01-27T01:51:59.128343Z","steps":["trace[2061404231] 'read index received'  (duration: 151.783728ms)","trace[2061404231] 'applied index is now lower than readState.Index'  (duration: 1.049586ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T01:51:59.128878Z","caller":"traceutil/trace.go:171","msg":"trace[2118441984] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"318.789094ms","start":"2025-01-27T01:51:58.810079Z","end":"2025-01-27T01:51:59.128868Z","steps":["trace[2118441984] 'process raft request'  (duration: 318.15944ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T01:51:59.129685Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T01:51:58.810056Z","time spent":"319.531469ms","remote":"127.0.0.1:52034","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1357 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2025-01-27T01:51:59.129527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.860907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2025-01-27T01:51:59.130756Z","caller":"traceutil/trace.go:171","msg":"trace[1497740313] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1365; }","duration":"155.254943ms","start":"2025-01-27T01:51:58.975485Z","end":"2025-01-27T01:51:59.130740Z","steps":["trace[1497740313] 'agreement among raft nodes before linearized reading'  (duration: 153.756914ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:52:11.456644Z","caller":"traceutil/trace.go:171","msg":"trace[1935046848] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1481; }","duration":"229.824336ms","start":"2025-01-27T01:52:11.226792Z","end":"2025-01-27T01:52:11.456616Z","steps":["trace[1935046848] 'process raft request'  (duration: 229.721998ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:52:11.456745Z","caller":"traceutil/trace.go:171","msg":"trace[679261075] linearizableReadLoop","detail":"{readStateIndex:1547; appliedIndex:1547; }","duration":"229.075486ms","start":"2025-01-27T01:52:11.227658Z","end":"2025-01-27T01:52:11.456733Z","steps":["trace[679261075] 'read index received'  (duration: 229.069976ms)","trace[679261075] 'applied index is now lower than readState.Index'  (duration: 4.612µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T01:52:11.456894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.2044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T01:52:11.456916Z","caller":"traceutil/trace.go:171","msg":"trace[1958359521] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:1481; }","duration":"229.271103ms","start":"2025-01-27T01:52:11.227639Z","end":"2025-01-27T01:52:11.456910Z","steps":["trace[1958359521] 'agreement among raft nodes before linearized reading'  (duration: 229.17661ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T01:52:11.542642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.472292ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T01:52:11.542701Z","caller":"traceutil/trace.go:171","msg":"trace[1681345043] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1482; }","duration":"126.56936ms","start":"2025-01-27T01:52:11.416121Z","end":"2025-01-27T01:52:11.542691Z","steps":["trace[1681345043] 'agreement among raft nodes before linearized reading'  (duration: 126.449569ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:52:11.542828Z","caller":"traceutil/trace.go:171","msg":"trace[1324439413] transaction","detail":"{read_only:false; response_revision:1482; number_of_response:1; }","duration":"313.404728ms","start":"2025-01-27T01:52:11.229417Z","end":"2025-01-27T01:52:11.542822Z","steps":["trace[1324439413] 'process raft request'  (duration: 309.86997ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T01:52:11.542886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T01:52:11.229406Z","time spent":"313.438393ms","remote":"127.0.0.1:51938","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1466 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T01:52:12.946731Z","caller":"traceutil/trace.go:171","msg":"trace[890692460] linearizableReadLoop","detail":"{readStateIndex:1572; appliedIndex:1571; }","duration":"180.680125ms","start":"2025-01-27T01:52:12.766036Z","end":"2025-01-27T01:52:12.946716Z","steps":["trace[890692460] 'read index received'  (duration: 180.476501ms)","trace[890692460] 'applied index is now lower than readState.Index'  (duration: 203.148µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T01:52:12.946819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.766552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T01:52:12.946839Z","caller":"traceutil/trace.go:171","msg":"trace[909939073] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1506; }","duration":"180.85345ms","start":"2025-01-27T01:52:12.765979Z","end":"2025-01-27T01:52:12.946832Z","steps":["trace[909939073] 'agreement among raft nodes before linearized reading'  (duration: 180.803955ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:52:12.946940Z","caller":"traceutil/trace.go:171","msg":"trace[340847078] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1506; }","duration":"196.890383ms","start":"2025-01-27T01:52:12.750022Z","end":"2025-01-27T01:52:12.946913Z","steps":["trace[340847078] 'process raft request'  (duration: 196.578328ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T01:52:26.705400Z","caller":"traceutil/trace.go:171","msg":"trace[1149154207] linearizableReadLoop","detail":"{readStateIndex:1729; appliedIndex:1728; }","duration":"289.800524ms","start":"2025-01-27T01:52:26.415586Z","end":"2025-01-27T01:52:26.705386Z","steps":["trace[1149154207] 'read index received'  (duration: 289.702913ms)","trace[1149154207] 'applied index is now lower than readState.Index'  (duration: 97.072µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T01:52:26.705615Z","caller":"traceutil/trace.go:171","msg":"trace[737170409] transaction","detail":"{read_only:false; response_revision:1654; number_of_response:1; }","duration":"303.963332ms","start":"2025-01-27T01:52:26.401643Z","end":"2025-01-27T01:52:26.705606Z","steps":["trace[737170409] 'process raft request'  (duration: 303.607539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T01:52:26.705694Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T01:52:26.401628Z","time spent":"304.02001ms","remote":"127.0.0.1:52102","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":950,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/storageclasses/local-path\" mod_revision:490 > success:<request_put:<key:\"/registry/storageclasses/local-path\" value_size:907 >> failure:<request_range:<key:\"/registry/storageclasses/local-path\" > >"}
	{"level":"warn","ts":"2025-01-27T01:52:26.705858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.27268ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T01:52:26.705890Z","caller":"traceutil/trace.go:171","msg":"trace[2054551304] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1654; }","duration":"290.304067ms","start":"2025-01-27T01:52:26.415580Z","end":"2025-01-27T01:52:26.705884Z","steps":["trace[2054551304] 'agreement among raft nodes before linearized reading'  (duration: 290.26441ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T01:52:26.705694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.785974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T01:52:26.706499Z","caller":"traceutil/trace.go:171","msg":"trace[70051785] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1654; }","duration":"108.240407ms","start":"2025-01-27T01:52:26.597890Z","end":"2025-01-27T01:52:26.706130Z","steps":["trace[70051785] 'agreement among raft nodes before linearized reading'  (duration: 107.676128ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:54:46 up 6 min,  0 users,  load average: 0.19, 0.64, 0.39
	Linux addons-903003 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5b075bdc01dece618b853b45378831952cd05ecb82ddeaf1db3c2ca3386134b2] <==
	I0127 01:50:03.305475       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0127 01:51:43.685251       1 conn.go:339] Error on socket receive: read tcp 192.168.39.61:8443->192.168.39.1:56734: use of closed network connection
	E0127 01:51:43.867772       1 conn.go:339] Error on socket receive: read tcp 192.168.39.61:8443->192.168.39.1:56760: use of closed network connection
	I0127 01:51:53.126697       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.129.39"}
	I0127 01:52:04.238400       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 01:52:20.362174       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 01:52:20.589584       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.98.30"}
	I0127 01:52:20.705767       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0127 01:52:24.814893       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 01:52:25.849168       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0127 01:52:42.901616       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 01:52:49.533825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 01:52:49.533879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 01:52:49.566282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 01:52:49.566346       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 01:52:49.652941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 01:52:49.653553       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 01:52:49.695336       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 01:52:49.695387       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 01:52:49.764681       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 01:52:49.765044       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 01:52:50.696340       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 01:52:50.765056       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0127 01:52:50.799781       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0127 01:54:44.868172       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.20.242"}
	
	
	==> kube-controller-manager [2d19f73e994612e11b8d2216395d21ff36d696e430d03d3c56d2a67274733289] <==
	E0127 01:53:37.121039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 01:53:57.853133       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 01:53:57.853923       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 01:53:57.854621       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 01:53:57.854687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 01:54:11.291094       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 01:54:11.292066       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 01:54:11.292929       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 01:54:11.293045       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 01:54:11.735377       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 01:54:11.736351       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 01:54:11.737057       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 01:54:11.737094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 01:54:23.037743       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 01:54:23.038774       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 01:54:23.039596       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 01:54:23.039664       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 01:54:44.569312       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 01:54:44.570280       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 01:54:44.571650       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 01:54:44.571757       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 01:54:44.690343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.871111ms"
	I0127 01:54:44.707276       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.697913ms"
	I0127 01:54:44.708339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="90.526µs"
	I0127 01:54:44.710322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="27.726µs"
	
	
	==> kube-proxy [c2116d528ea98dad2c7c0944717a49cfcf0cc34384ee311de1399a92a004aab0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 01:48:58.280957       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 01:48:58.295367       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.61"]
	E0127 01:48:58.295431       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 01:48:58.391768       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 01:48:58.391844       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 01:48:58.391867       1 server_linux.go:170] "Using iptables Proxier"
	I0127 01:48:58.398772       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 01:48:58.399083       1 server.go:497] "Version info" version="v1.32.1"
	I0127 01:48:58.399096       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 01:48:58.400386       1 config.go:199] "Starting service config controller"
	I0127 01:48:58.400407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 01:48:58.400434       1 config.go:105] "Starting endpoint slice config controller"
	I0127 01:48:58.400437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 01:48:58.401116       1 config.go:329] "Starting node config controller"
	I0127 01:48:58.401125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 01:48:58.501138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 01:48:58.501184       1 shared_informer.go:320] Caches are synced for node config
	I0127 01:48:58.501191       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [de737e59fe297b33460fc65a4cd9a166510b84f45473e8462e20570acfde5d49] <==
	W0127 01:48:49.129939       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 01:48:49.129974       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:49.130151       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 01:48:49.130233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:49.951731       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 01:48:49.951817       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:49.973874       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 01:48:49.973961       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:49.992135       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 01:48:49.992206       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.056780       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 01:48:50.056969       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.066424       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 01:48:50.066513       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.119212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 01:48:50.119257       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.205861       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 01:48:50.206042       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.295403       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 01:48:50.295495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.387715       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 01:48:50.388710       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 01:48:50.392028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 01:48:50.392066       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 01:48:50.720627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 01:53:52 addons-903003 kubelet[1225]: E0127 01:53:52.019749    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942832019336297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:53:58 addons-903003 kubelet[1225]: I0127 01:53:58.616824    1225 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 01:54:02 addons-903003 kubelet[1225]: E0127 01:54:02.022669    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942842022209357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:02 addons-903003 kubelet[1225]: E0127 01:54:02.022773    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942842022209357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:12 addons-903003 kubelet[1225]: E0127 01:54:12.025256    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942852024658436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:12 addons-903003 kubelet[1225]: E0127 01:54:12.025530    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942852024658436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:22 addons-903003 kubelet[1225]: E0127 01:54:22.027741    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942862027440480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:22 addons-903003 kubelet[1225]: E0127 01:54:22.028078    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942862027440480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:32 addons-903003 kubelet[1225]: E0127 01:54:32.032814    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942872032140140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:32 addons-903003 kubelet[1225]: E0127 01:54:32.032857    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942872032140140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:42 addons-903003 kubelet[1225]: E0127 01:54:42.035782    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942882035392704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:42 addons-903003 kubelet[1225]: E0127 01:54:42.035829    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737942882035392704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.684979    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="node-driver-registrar"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685347    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="0e8e5994-ea5e-41cf-af67-aa25e57fbb4c" containerName="local-path-provisioner"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685397    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="hostpath"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685429    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="liveness-probe"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685459    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="csi-snapshotter"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685490    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="020e6c29-561d-4141-9ef6-39c6e757894e" containerName="volume-snapshot-controller"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685520    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="34218051-32b9-4cc4-9774-7bb55de887b6" containerName="volume-snapshot-controller"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685550    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="7b2cb23c-c4c6-4c1f-9802-50368d92e77d" containerName="task-pv-container"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685582    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="cdd9382d-234a-4225-a25f-fc9ba54f929d" containerName="csi-attacher"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685614    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="5bc032a0-aba8-4036-aa26-bbf0e9238342" containerName="csi-resizer"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685646    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="csi-external-health-monitor-controller"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.685677    1225 memory_manager.go:355] "RemoveStaleState removing state" podUID="597dba22-bb11-4ce8-bbce-c97797ccffdc" containerName="csi-provisioner"
	Jan 27 01:54:44 addons-903003 kubelet[1225]: I0127 01:54:44.773117    1225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wdg\" (UniqueName: \"kubernetes.io/projected/bf21e707-d5b5-4d2a-8551-f8680a651b7b-kube-api-access-f7wdg\") pod \"hello-world-app-7d9564db4-lckw7\" (UID: \"bf21e707-d5b5-4d2a-8551-f8680a651b7b\") " pod="default/hello-world-app-7d9564db4-lckw7"
	
	
	==> storage-provisioner [e3a546b9ecb226863dae429d6e56f6a3121105316a155f1d38f896f25755c0f8] <==
	I0127 01:49:03.060091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 01:49:03.150519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 01:49:03.150831       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 01:49:03.279318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 01:49:03.279459       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-903003_1ff4b184-cc22-4253-b15c-4cfdbdc5a754!
	I0127 01:49:03.290209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47926c68-5345-47cf-99cb-b6dd3330e827", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-903003_1ff4b184-cc22-4253-b15c-4cfdbdc5a754 became leader
	I0127 01:49:03.599736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-903003_1ff4b184-cc22-4253-b15c-4cfdbdc5a754!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-903003 -n addons-903003
helpers_test.go:261: (dbg) Run:  kubectl --context addons-903003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-lckw7 ingress-nginx-admission-create-hjnbs ingress-nginx-admission-patch-dpms5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-903003 describe pod hello-world-app-7d9564db4-lckw7 ingress-nginx-admission-create-hjnbs ingress-nginx-admission-patch-dpms5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-903003 describe pod hello-world-app-7d9564db4-lckw7 ingress-nginx-admission-create-hjnbs ingress-nginx-admission-patch-dpms5: exit status 1 (67.398935ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-lckw7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-903003/192.168.39.61
	Start Time:       Mon, 27 Jan 2025 01:54:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f7wdg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f7wdg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-lckw7 to addons-903003
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hjnbs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dpms5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-903003 describe pod hello-world-app-7d9564db4-lckw7 ingress-nginx-admission-create-hjnbs ingress-nginx-admission-patch-dpms5: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable ingress-dns --alsologtostderr -v=1: (1.172591675s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable ingress --alsologtostderr -v=1: (7.669624154s)
--- FAIL: TestAddons/parallel/Ingress (155.94s)
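For context on the failure above: the step that broke is the in-VM `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`, and ssh exit status 28 is curl's operation-timeout exit code, meaning the ingress controller never answered the name-based request within the curl deadline. Below is a minimal Go sketch of an equivalent reachability probe run from inside the node; the URL, Host header, and timeout mirror the command shown above, but this is illustrative only and is not the code in addons_test.go.

	// ingress_probe.go: illustrative sketch (not addons_test.go) of the check that
	// failed above: an HTTP GET to the ingress on 127.0.0.1 with a spoofed Host
	// header and a hard timeout.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// The failing run saw curl exit 28, i.e. it hit its timeout with no response.
		client := &http.Client{Timeout: 30 * time.Second}

		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, "build request:", err)
			os.Exit(1)
		}
		// Ingress routing is name-based: the rule matches on the Host header,
		// so the probe dials 127.0.0.1 but presents nginx.example.com.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, "probe failed:", err) // a timeout here mirrors the exit-28 failure
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("ingress responded with", resp.Status)
	}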

                                                
                                    
TestPreload (227.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-445920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-445920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m3.455866895s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-445920 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-445920 image pull gcr.io/k8s-minikube/busybox: (3.246600597s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-445920
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-445920: (7.29044569s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-445920 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0127 02:44:48.566595  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-445920 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.294851065s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-445920 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-27 02:46:01.402213289 +0000 UTC m=+3508.923970191
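The assertion that failed is visible above: after the `minikube stop` / `start` cycle the previously pulled `gcr.io/k8s-minikube/busybox` image is expected to still show up in `image list`, but only the Kubernetes component images survived. The sketch below shows that kind of post-restart check using only the commands that appear in this report; it is a hypothetical helper for illustration, not the actual preload_test.go implementation.

	// image_check.go: hedged sketch of the post-restart assertion, assuming the
	// minikube binary path and profile name used in this run; not preload_test.go.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const (
			binary  = "out/minikube-linux-amd64" // binary path used throughout this report
			profile = "test-preload-445920"
			want    = "gcr.io/k8s-minikube/busybox"
		)

		// Same command the test runs after the stop/start cycle.
		out, err := exec.Command(binary, "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// The failure above is exactly this condition: the pulled image is missing
		// from the list after restarting without a matching preload tarball.
		if !strings.Contains(string(out), want) {
			fmt.Fprintf(os.Stderr, "expected %s in image list, got:\n%s", want, out)
			os.Exit(1)
		}
		fmt.Println("image survived the stop/start cycle")
	}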
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-445920 -n test-preload-445920
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-445920 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-445920 logs -n 25: (1.149497145s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-207207 ssh -n                                                                 | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	|         | multinode-207207-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-207207 ssh -n multinode-207207 sudo cat                                       | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	|         | /home/docker/cp-test_multinode-207207-m03_multinode-207207.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-207207 cp multinode-207207-m03:/home/docker/cp-test.txt                       | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	|         | multinode-207207-m02:/home/docker/cp-test_multinode-207207-m03_multinode-207207-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-207207 ssh -n                                                                 | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	|         | multinode-207207-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-207207 ssh -n multinode-207207-m02 sudo cat                                   | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	|         | /home/docker/cp-test_multinode-207207-m03_multinode-207207-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-207207 node stop m03                                                          | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:30 UTC |
	| node    | multinode-207207 node start                                                             | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:30 UTC | 27 Jan 25 02:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-207207                                                                | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:31 UTC |                     |
	| stop    | -p multinode-207207                                                                     | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:31 UTC | 27 Jan 25 02:34 UTC |
	| start   | -p multinode-207207                                                                     | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:34 UTC | 27 Jan 25 02:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-207207                                                                | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:36 UTC |                     |
	| node    | multinode-207207 node delete                                                            | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:36 UTC | 27 Jan 25 02:36 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-207207 stop                                                                   | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:36 UTC | 27 Jan 25 02:39 UTC |
	| start   | -p multinode-207207                                                                     | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC | 27 Jan 25 02:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-207207                                                                | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC |                     |
	| start   | -p multinode-207207-m02                                                                 | multinode-207207-m02 | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-207207-m03                                                                 | multinode-207207-m03 | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-207207                                                                 | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	| delete  | -p multinode-207207-m03                                                                 | multinode-207207-m03 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC | 27 Jan 25 02:42 UTC |
	| delete  | -p multinode-207207                                                                     | multinode-207207     | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC | 27 Jan 25 02:42 UTC |
	| start   | -p test-preload-445920                                                                  | test-preload-445920  | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC | 27 Jan 25 02:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-445920 image pull                                                          | test-preload-445920  | jenkins | v1.35.0 | 27 Jan 25 02:44 UTC | 27 Jan 25 02:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-445920                                                                  | test-preload-445920  | jenkins | v1.35.0 | 27 Jan 25 02:44 UTC | 27 Jan 25 02:44 UTC |
	| start   | -p test-preload-445920                                                                  | test-preload-445920  | jenkins | v1.35.0 | 27 Jan 25 02:44 UTC | 27 Jan 25 02:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-445920 image list                                                          | test-preload-445920  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC | 27 Jan 25 02:46 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:44:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:44:30.933214  936523 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:44:30.933313  936523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:44:30.933321  936523 out.go:358] Setting ErrFile to fd 2...
	I0127 02:44:30.933325  936523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:44:30.933519  936523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:44:30.934063  936523 out.go:352] Setting JSON to false
	I0127 02:44:30.934996  936523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12414,"bootTime":1737933457,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:44:30.935107  936523 start.go:139] virtualization: kvm guest
	I0127 02:44:30.937282  936523 out.go:177] * [test-preload-445920] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:44:30.938558  936523 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:44:30.938639  936523 notify.go:220] Checking for updates...
	I0127 02:44:30.940630  936523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:44:30.942353  936523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:44:30.943520  936523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:44:30.944727  936523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:44:30.945780  936523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:44:30.947362  936523 config.go:182] Loaded profile config "test-preload-445920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 02:44:30.947746  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:44:30.947793  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:44:30.962934  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0127 02:44:30.963365  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:44:30.963947  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:44:30.963973  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:44:30.964310  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:44:30.964531  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:44:30.966108  936523 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 02:44:30.967069  936523 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:44:30.967402  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:44:30.967447  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:44:30.981810  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0127 02:44:30.982297  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:44:30.982797  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:44:30.982814  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:44:30.983120  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:44:30.983301  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:44:31.017840  936523 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:44:31.019120  936523 start.go:297] selected driver: kvm2
	I0127 02:44:31.019134  936523 start.go:901] validating driver "kvm2" against &{Name:test-preload-445920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-445920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:44:31.019248  936523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:44:31.019906  936523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:44:31.019992  936523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:44:31.035547  936523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:44:31.035928  936523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:44:31.035959  936523 cni.go:84] Creating CNI manager for ""
	I0127 02:44:31.036013  936523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:44:31.036072  936523 start.go:340] cluster config:
	{Name:test-preload-445920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-445920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:44:31.036209  936523 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:44:31.037925  936523 out.go:177] * Starting "test-preload-445920" primary control-plane node in "test-preload-445920" cluster
	I0127 02:44:31.039061  936523 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 02:44:31.504852  936523 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 02:44:31.504906  936523 cache.go:56] Caching tarball of preloaded images
	I0127 02:44:31.505141  936523 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 02:44:31.506817  936523 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 02:44:31.507985  936523 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 02:44:31.608968  936523 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 02:44:43.262590  936523 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 02:44:43.262696  936523 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 02:44:44.125575  936523 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0127 02:44:44.125733  936523 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/config.json ...
	I0127 02:44:44.126002  936523 start.go:360] acquireMachinesLock for test-preload-445920: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:44:44.126090  936523 start.go:364] duration metric: took 56.825µs to acquireMachinesLock for "test-preload-445920"
	I0127 02:44:44.126110  936523 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:44:44.126116  936523 fix.go:54] fixHost starting: 
	I0127 02:44:44.126401  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:44:44.126441  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:44:44.141729  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0127 02:44:44.142178  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:44:44.142680  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:44:44.142704  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:44:44.143000  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:44:44.143144  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:44:44.143249  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetState
	I0127 02:44:44.144748  936523 fix.go:112] recreateIfNeeded on test-preload-445920: state=Stopped err=<nil>
	I0127 02:44:44.144775  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	W0127 02:44:44.144939  936523 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:44:44.147026  936523 out.go:177] * Restarting existing kvm2 VM for "test-preload-445920" ...
	I0127 02:44:44.148189  936523 main.go:141] libmachine: (test-preload-445920) Calling .Start
	I0127 02:44:44.148422  936523 main.go:141] libmachine: (test-preload-445920) starting domain...
	I0127 02:44:44.148443  936523 main.go:141] libmachine: (test-preload-445920) ensuring networks are active...
	I0127 02:44:44.149186  936523 main.go:141] libmachine: (test-preload-445920) Ensuring network default is active
	I0127 02:44:44.149542  936523 main.go:141] libmachine: (test-preload-445920) Ensuring network mk-test-preload-445920 is active
	I0127 02:44:44.149874  936523 main.go:141] libmachine: (test-preload-445920) getting domain XML...
	I0127 02:44:44.150662  936523 main.go:141] libmachine: (test-preload-445920) creating domain...
	I0127 02:44:45.357297  936523 main.go:141] libmachine: (test-preload-445920) waiting for IP...
	I0127 02:44:45.358092  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:45.358473  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:45.358588  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:45.358490  936606 retry.go:31] will retry after 211.843885ms: waiting for domain to come up
	I0127 02:44:45.572200  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:45.572637  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:45.572676  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:45.572567  936606 retry.go:31] will retry after 323.41854ms: waiting for domain to come up
	I0127 02:44:45.898214  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:45.898656  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:45.898686  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:45.898605  936606 retry.go:31] will retry after 411.455831ms: waiting for domain to come up
	I0127 02:44:46.311284  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:46.311681  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:46.311711  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:46.311631  936606 retry.go:31] will retry after 451.128615ms: waiting for domain to come up
	I0127 02:44:46.764195  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:46.764589  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:46.764616  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:46.764562  936606 retry.go:31] will retry after 760.996314ms: waiting for domain to come up
	I0127 02:44:47.527849  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:47.528210  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:47.528257  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:47.528211  936606 retry.go:31] will retry after 573.975297ms: waiting for domain to come up
	I0127 02:44:48.104285  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:48.104623  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:48.104645  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:48.104602  936606 retry.go:31] will retry after 1.097016662s: waiting for domain to come up
	I0127 02:44:49.203645  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:49.204055  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:49.204082  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:49.204034  936606 retry.go:31] will retry after 1.432803821s: waiting for domain to come up
	I0127 02:44:50.638718  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:50.639200  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:50.639223  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:50.639156  936606 retry.go:31] will retry after 1.121389949s: waiting for domain to come up
	I0127 02:44:51.762469  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:51.762975  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:51.763007  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:51.762946  936606 retry.go:31] will retry after 2.248467772s: waiting for domain to come up
	I0127 02:44:54.014326  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:54.014701  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:54.014732  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:54.014654  936606 retry.go:31] will retry after 2.057497481s: waiting for domain to come up
	I0127 02:44:56.074298  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:56.074576  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:56.074641  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:56.074584  936606 retry.go:31] will retry after 2.789535313s: waiting for domain to come up
	I0127 02:44:58.867624  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:44:58.868026  936523 main.go:141] libmachine: (test-preload-445920) DBG | unable to find current IP address of domain test-preload-445920 in network mk-test-preload-445920
	I0127 02:44:58.868051  936523 main.go:141] libmachine: (test-preload-445920) DBG | I0127 02:44:58.868011  936606 retry.go:31] will retry after 3.93089998s: waiting for domain to come up
	I0127 02:45:02.803553  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.804234  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has current primary IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.804265  936523 main.go:141] libmachine: (test-preload-445920) found domain IP: 192.168.39.65
	I0127 02:45:02.804276  936523 main.go:141] libmachine: (test-preload-445920) reserving static IP address...
	I0127 02:45:02.804745  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "test-preload-445920", mac: "52:54:00:36:d0:ff", ip: "192.168.39.65"} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:02.804783  936523 main.go:141] libmachine: (test-preload-445920) DBG | skip adding static IP to network mk-test-preload-445920 - found existing host DHCP lease matching {name: "test-preload-445920", mac: "52:54:00:36:d0:ff", ip: "192.168.39.65"}
	I0127 02:45:02.804803  936523 main.go:141] libmachine: (test-preload-445920) reserved static IP address 192.168.39.65 for domain test-preload-445920
	I0127 02:45:02.804819  936523 main.go:141] libmachine: (test-preload-445920) waiting for SSH...
	I0127 02:45:02.804834  936523 main.go:141] libmachine: (test-preload-445920) DBG | Getting to WaitForSSH function...
	I0127 02:45:02.807059  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.807409  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:02.807441  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.807550  936523 main.go:141] libmachine: (test-preload-445920) DBG | Using SSH client type: external
	I0127 02:45:02.807568  936523 main.go:141] libmachine: (test-preload-445920) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa (-rw-------)
	I0127 02:45:02.807601  936523 main.go:141] libmachine: (test-preload-445920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:45:02.807615  936523 main.go:141] libmachine: (test-preload-445920) DBG | About to run SSH command:
	I0127 02:45:02.807628  936523 main.go:141] libmachine: (test-preload-445920) DBG | exit 0
	I0127 02:45:02.932722  936523 main.go:141] libmachine: (test-preload-445920) DBG | SSH cmd err, output: <nil>: 
	I0127 02:45:02.933094  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetConfigRaw
	I0127 02:45:02.933745  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetIP
	I0127 02:45:02.936563  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.936992  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:02.937033  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.937279  936523 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/config.json ...
	I0127 02:45:02.937472  936523 machine.go:93] provisionDockerMachine start ...
	I0127 02:45:02.937491  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:02.937726  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:02.940098  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.940410  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:02.940429  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:02.940549  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:02.940739  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:02.940917  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:02.941076  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:02.941254  936523 main.go:141] libmachine: Using SSH client type: native
	I0127 02:45:02.941452  936523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0127 02:45:02.941463  936523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:45:03.044955  936523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 02:45:03.044997  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetMachineName
	I0127 02:45:03.045284  936523 buildroot.go:166] provisioning hostname "test-preload-445920"
	I0127 02:45:03.045312  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetMachineName
	I0127 02:45:03.045494  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.048208  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.048570  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.048605  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.048696  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.048880  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.049069  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.049191  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.049348  936523 main.go:141] libmachine: Using SSH client type: native
	I0127 02:45:03.049522  936523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0127 02:45:03.049533  936523 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-445920 && echo "test-preload-445920" | sudo tee /etc/hostname
	I0127 02:45:03.166630  936523 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-445920
	
	I0127 02:45:03.166664  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.169381  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.169774  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.169808  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.169985  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.170183  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.170339  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.170476  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.170658  936523 main.go:141] libmachine: Using SSH client type: native
	I0127 02:45:03.170872  936523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0127 02:45:03.170896  936523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-445920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-445920/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-445920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:45:03.281426  936523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:45:03.281471  936523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 02:45:03.281538  936523 buildroot.go:174] setting up certificates
	I0127 02:45:03.281554  936523 provision.go:84] configureAuth start
	I0127 02:45:03.281571  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetMachineName
	I0127 02:45:03.281890  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetIP
	I0127 02:45:03.284489  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.284794  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.284830  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.285034  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.287005  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.287364  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.287395  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.287479  936523 provision.go:143] copyHostCerts
	I0127 02:45:03.287549  936523 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 02:45:03.287569  936523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 02:45:03.287637  936523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 02:45:03.287739  936523 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 02:45:03.287748  936523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 02:45:03.287774  936523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 02:45:03.287842  936523 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 02:45:03.287850  936523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 02:45:03.287875  936523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 02:45:03.287940  936523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.test-preload-445920 san=[127.0.0.1 192.168.39.65 localhost minikube test-preload-445920]
	I0127 02:45:03.478415  936523 provision.go:177] copyRemoteCerts
	I0127 02:45:03.478479  936523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:45:03.478508  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.481356  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.481683  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.481714  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.481903  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.482144  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.482327  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.482476  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:03.562669  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:45:03.584876  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 02:45:03.606630  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:45:03.628487  936523 provision.go:87] duration metric: took 346.916525ms to configureAuth
	I0127 02:45:03.628516  936523 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:45:03.628697  936523 config.go:182] Loaded profile config "test-preload-445920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 02:45:03.628774  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.631277  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.631646  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.631691  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.631841  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.632025  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.632184  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.632316  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.632444  936523 main.go:141] libmachine: Using SSH client type: native
	I0127 02:45:03.632660  936523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0127 02:45:03.632678  936523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 02:45:03.851987  936523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 02:45:03.852014  936523 machine.go:96] duration metric: took 914.529832ms to provisionDockerMachine
	I0127 02:45:03.852028  936523 start.go:293] postStartSetup for "test-preload-445920" (driver="kvm2")
	I0127 02:45:03.852039  936523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:45:03.852054  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:03.852430  936523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:45:03.852486  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.855134  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.855527  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.855559  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.855705  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.855907  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.856073  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.856192  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:03.939051  936523 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:45:03.943050  936523 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:45:03.943081  936523 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 02:45:03.943155  936523 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 02:45:03.943248  936523 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 02:45:03.943401  936523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:45:03.952298  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:45:03.974619  936523 start.go:296] duration metric: took 122.572994ms for postStartSetup
	I0127 02:45:03.974665  936523 fix.go:56] duration metric: took 19.848549092s for fixHost
	I0127 02:45:03.974690  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:03.977371  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.977722  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:03.977750  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:03.977909  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:03.978110  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.978317  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:03.978478  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:03.978619  936523 main.go:141] libmachine: Using SSH client type: native
	I0127 02:45:03.978827  936523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0127 02:45:03.978843  936523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:45:04.081373  936523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737945904.055387932
	
	I0127 02:45:04.081418  936523 fix.go:216] guest clock: 1737945904.055387932
	I0127 02:45:04.081426  936523 fix.go:229] Guest: 2025-01-27 02:45:04.055387932 +0000 UTC Remote: 2025-01-27 02:45:03.974669424 +0000 UTC m=+33.080747543 (delta=80.718508ms)
	I0127 02:45:04.081463  936523 fix.go:200] guest clock delta is within tolerance: 80.718508ms
	I0127 02:45:04.081468  936523 start.go:83] releasing machines lock for "test-preload-445920", held for 19.955366862s
	I0127 02:45:04.081491  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:04.081712  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetIP
	I0127 02:45:04.084247  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.084535  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:04.084568  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.084750  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:04.085224  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:04.085367  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:04.085460  936523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:45:04.085506  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:04.085623  936523 ssh_runner.go:195] Run: cat /version.json
	I0127 02:45:04.085652  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:04.088156  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.088431  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.088469  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:04.088495  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.088637  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:04.088823  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:04.088840  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:04.088854  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:04.088981  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:04.089064  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:04.089128  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:04.089199  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:04.089327  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:04.089456  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:04.192357  936523 ssh_runner.go:195] Run: systemctl --version
	I0127 02:45:04.198008  936523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 02:45:04.336415  936523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:45:04.342878  936523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:45:04.342956  936523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:45:04.358567  936523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:45:04.358599  936523 start.go:495] detecting cgroup driver to use...
	I0127 02:45:04.358666  936523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 02:45:04.374772  936523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 02:45:04.387935  936523 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:45:04.387991  936523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:45:04.400474  936523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:45:04.413100  936523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:45:04.534565  936523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:45:04.679600  936523 docker.go:233] disabling docker service ...
	I0127 02:45:04.679673  936523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:45:04.693313  936523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:45:04.705728  936523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:45:04.838771  936523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:45:04.961266  936523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:45:04.974467  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:45:04.991773  936523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 02:45:04.991837  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.001399  936523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 02:45:05.001466  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.010814  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.020305  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.029710  936523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:45:05.039382  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.048633  936523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.064948  936523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:45:05.074855  936523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:45:05.084117  936523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:45:05.084192  936523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:45:05.095891  936523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:45:05.105455  936523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:45:05.222786  936523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 02:45:05.319451  936523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 02:45:05.319541  936523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 02:45:05.324301  936523 start.go:563] Will wait 60s for crictl version
	I0127 02:45:05.324365  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:05.327869  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:45:05.364578  936523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 02:45:05.364654  936523 ssh_runner.go:195] Run: crio --version
	I0127 02:45:05.391266  936523 ssh_runner.go:195] Run: crio --version
	I0127 02:45:05.419283  936523 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 02:45:05.420747  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetIP
	I0127 02:45:05.423536  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:05.423810  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:05.423846  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:05.424066  936523 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 02:45:05.428038  936523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:45:05.439853  936523 kubeadm.go:883] updating cluster {Name:test-preload-445920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-445920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:45:05.440009  936523 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 02:45:05.440062  936523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:45:05.474447  936523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 02:45:05.474516  936523 ssh_runner.go:195] Run: which lz4
	I0127 02:45:05.478360  936523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 02:45:05.482153  936523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 02:45:05.482186  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 02:45:06.902051  936523 crio.go:462] duration metric: took 1.423724339s to copy over tarball
	I0127 02:45:06.902140  936523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 02:45:09.302647  936523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400468494s)
	I0127 02:45:09.302685  936523 crio.go:469] duration metric: took 2.400598451s to extract the tarball
	I0127 02:45:09.302693  936523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 02:45:09.343272  936523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:45:09.385257  936523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 02:45:09.385285  936523 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:45:09.385358  936523 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:45:09.385381  936523 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.385404  936523 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.385379  936523 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.385466  936523 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 02:45:09.385505  936523 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.385446  936523 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.385739  936523 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:09.386966  936523 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.386977  936523 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.386997  936523 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.387006  936523 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.387041  936523 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.386966  936523 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 02:45:09.387131  936523 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:45:09.386967  936523 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:09.605577  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 02:45:09.618462  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.624021  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.636979  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.645020  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.649547  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.660001  936523 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 02:45:09.660125  936523 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 02:45:09.660182  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.706415  936523 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 02:45:09.706461  936523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.706504  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.728102  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:09.744528  936523 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 02:45:09.744591  936523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.744648  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.762980  936523 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 02:45:09.763043  936523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.763099  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.768119  936523 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 02:45:09.768183  936523 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.768246  936523 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 02:45:09.768287  936523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.768312  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 02:45:09.768323  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.768255  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.768323  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.814934  936523 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 02:45:09.814979  936523 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:09.815041  936523 ssh_runner.go:195] Run: which crictl
	I0127 02:45:09.815040  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.815123  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.815157  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.839424  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.855432  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.856795  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 02:45:09.930854  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:09.930913  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:09.930993  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:09.938556  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:09.959715  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 02:45:09.976310  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:09.981289  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 02:45:10.068722  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 02:45:10.068820  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:10.068857  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 02:45:10.070250  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 02:45:10.087493  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 02:45:10.087613  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 02:45:10.131739  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:45:10.146130  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 02:45:10.146248  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 02:45:10.184614  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 02:45:10.184733  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 02:45:10.213841  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 02:45:10.213965  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 02:45:10.214000  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 02:45:10.213967  936523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:45:10.214064  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 02:45:10.214074  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 02:45:10.214082  936523 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 02:45:10.214119  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 02:45:10.230946  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 02:45:10.230963  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 02:45:10.231008  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 02:45:10.231071  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:45:10.257300  936523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 02:45:10.257359  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 02:45:10.257397  936523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:45:10.584800  936523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:45:12.994830  936523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.780677464s)
	I0127 02:45:12.994878  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 02:45:12.994884  936523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.780791007s)
	I0127 02:45:12.994906  936523 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 02:45:12.994910  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 02:45:12.994963  936523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.763854369s)
	I0127 02:45:12.994996  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 02:45:12.994974  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 02:45:12.995024  936523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.737606663s)
	I0127 02:45:12.995053  936523 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 02:45:12.995096  936523 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.410262028s)
	I0127 02:45:13.136679  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 02:45:13.136729  936523 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 02:45:13.136797  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 02:45:13.574625  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 02:45:13.574681  936523 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 02:45:13.574739  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 02:45:14.418578  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 02:45:14.418633  936523 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 02:45:14.418681  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 02:45:15.165452  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 02:45:15.165504  936523 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:45:15.165550  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:45:17.212394  936523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.04681983s)
	I0127 02:45:17.212427  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 02:45:17.212473  936523 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:45:17.212578  936523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:45:17.561142  936523 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 02:45:17.561203  936523 cache_images.go:123] Successfully loaded all cached images
	I0127 02:45:17.561212  936523 cache_images.go:92] duration metric: took 8.175913937s to LoadCachedImages
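The "needs transfer" decisions above compare the image ID reported by the runtime (via podman image inspect) with the ID expected from the local cache; on a mismatch the stale tag is removed with crictl and the cached tarball is loaded with podman. A rough sketch of that check for a single image, using the pause image and hash that appear in the log (error handling trimmed; this is an illustration, not minikube's code):

    // image_check.go: decide whether a cached image needs to be transferred and loaded.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID asks the runtime for the ID of a tagged image via podman.
    func imageID(image string) (string, error) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	image := "registry.k8s.io/pause:3.7"
    	// Expected ID for this image, as reported in the log above.
    	want := "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165"

    	got, err := imageID(image)
    	if err != nil || got != want {
    		fmt.Printf("%q needs transfer (have %q)\n", image, got)
    		// Remove the stale tag, then load the cached tarball (errors ignored for brevity).
    		exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    		exec.Command("sudo", "podman", "load", "-i",
    			"/var/lib/minikube/images/pause_3.7").Run()
    		return
    	}
    	fmt.Printf("%q already present\n", image)
    }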
	I0127 02:45:17.561235  936523 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.24.4 crio true true} ...
	I0127 02:45:17.561420  936523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-445920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-445920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:45:17.561515  936523 ssh_runner.go:195] Run: crio config
	I0127 02:45:17.607211  936523 cni.go:84] Creating CNI manager for ""
	I0127 02:45:17.607237  936523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:45:17.607250  936523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:45:17.607273  936523 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-445920 NodeName:test-preload-445920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:45:17.607470  936523 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-445920"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:45:17.607554  936523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 02:45:17.616781  936523 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:45:17.616863  936523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:45:17.625757  936523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0127 02:45:17.641275  936523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:45:17.656482  936523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0127 02:45:17.672292  936523 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0127 02:45:17.676080  936523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
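The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts: it drops any existing mapping for that name, appends the new one, and installs the result with sudo cp. The same idea in Go, under the assumption that writing via a temp file plus sudo cp is acceptable (paths and IP are the ones from the log):

    // hosts_entry.go: ensure /etc/hosts maps control-plane.minikube.internal to the node IP.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const ip = "192.168.39.65" // node IP from the log

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Keep every line except a stale "<ip>\tcontrol-plane.minikube.internal" entry.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))

    	// Write a temp copy and install it with sudo, mirroring the shell pipeline.
    	tmp, err := os.CreateTemp("", "hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.Remove(tmp.Name())
    	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
    		log.Fatal(err)
    	}
    	tmp.Close()
    	if out, err := exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").CombinedOutput(); err != nil {
    		log.Fatalf("install failed: %v\n%s", err, out)
    	}
    }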
	I0127 02:45:17.687449  936523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:45:17.810831  936523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:45:17.827866  936523 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920 for IP: 192.168.39.65
	I0127 02:45:17.827889  936523 certs.go:194] generating shared ca certs ...
	I0127 02:45:17.827907  936523 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:45:17.828066  936523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:45:17.828111  936523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:45:17.828122  936523 certs.go:256] generating profile certs ...
	I0127 02:45:17.828213  936523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/client.key
	I0127 02:45:17.828279  936523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/apiserver.key.f4fe735f
	I0127 02:45:17.828313  936523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/proxy-client.key
	I0127 02:45:17.828425  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:45:17.828457  936523 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:45:17.828472  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:45:17.828498  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:45:17.828524  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:45:17.828547  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:45:17.828584  936523 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:45:17.829371  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:45:17.864478  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:45:17.902190  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:45:17.929569  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:45:17.957995  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 02:45:17.985801  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:45:18.013566  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:45:18.048766  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:45:18.072777  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:45:18.103917  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:45:18.127613  936523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:45:18.150999  936523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:45:18.168145  936523 ssh_runner.go:195] Run: openssl version
	I0127 02:45:18.174030  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:45:18.184550  936523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:45:18.189118  936523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:45:18.189261  936523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:45:18.195013  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:45:18.205511  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:45:18.216106  936523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:45:18.220672  936523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:45:18.220750  936523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:45:18.226329  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 02:45:18.236895  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:45:18.247638  936523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:45:18.252020  936523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:45:18.252081  936523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:45:18.257488  936523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:45:18.267810  936523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:45:18.272757  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:45:18.278970  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:45:18.284919  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:45:18.290950  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:45:18.296956  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:45:18.302683  936523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
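Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks a single question: does this certificate expire within the next 24 hours? The equivalent check with Go's standard library, using one of the certificate paths from the log (a sketch, not minikube's implementation):

    // cert_check.go: report whether a certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatalf("%s: no PEM data", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Printf("%s expires within 24h (NotAfter %s)\n", path, cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Printf("%s is valid beyond the next 24h\n", path)
    }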
	I0127 02:45:18.308690  936523 kubeadm.go:392] StartCluster: {Name:test-preload-445920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-445920 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:45:18.308805  936523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:45:18.308864  936523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:45:18.345798  936523 cri.go:89] found id: ""
	I0127 02:45:18.345874  936523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:45:18.355934  936523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:45:18.355962  936523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:45:18.356022  936523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:45:18.365450  936523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:45:18.366002  936523 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-445920" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:45:18.366123  936523 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-445920" cluster setting kubeconfig missing "test-preload-445920" context setting]
	I0127 02:45:18.366404  936523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:45:18.367081  936523 kapi.go:59] client config for test-preload-445920: &rest.Config{Host:"https://192.168.39.65:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/client.crt", KeyFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/client.key", CAFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 02:45:18.367849  936523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:45:18.377087  936523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0127 02:45:18.377127  936523 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:45:18.377140  936523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 02:45:18.377186  936523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:45:18.411159  936523 cri.go:89] found id: ""
	I0127 02:45:18.411260  936523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:45:18.427474  936523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:45:18.437122  936523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:45:18.437151  936523 kubeadm.go:157] found existing configuration files:
	
	I0127 02:45:18.437211  936523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:45:18.446349  936523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:45:18.446424  936523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:45:18.456031  936523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:45:18.465490  936523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:45:18.465585  936523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:45:18.474487  936523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:45:18.483098  936523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:45:18.483176  936523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:45:18.492086  936523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:45:18.500512  936523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:45:18.500578  936523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:45:18.509989  936523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:45:18.518944  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:18.614454  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:19.187994  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:19.437420  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:19.506566  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:19.583719  936523 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:45:19.583806  936523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:45:20.083944  936523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:45:20.584038  936523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:45:20.602281  936523 api_server.go:72] duration metric: took 1.018561301s to wait for apiserver process to appear ...
	I0127 02:45:20.602316  936523 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:45:20.602341  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:20.602883  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": dial tcp 192.168.39.65:8443: connect: connection refused
	I0127 02:45:21.102874  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:26.103856  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:45:26.103941  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:31.104557  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:45:31.104634  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:36.105702  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:45:36.105761  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:41.106374  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:45:41.106427  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:41.375806  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": read tcp 192.168.39.1:43950->192.168.39.65:8443: read: connection reset by peer
	I0127 02:45:41.603318  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:41.604069  936523 api_server.go:269] stopped: https://192.168.39.65:8443/healthz: Get "https://192.168.39.65:8443/healthz": dial tcp 192.168.39.65:8443: connect: connection refused
	I0127 02:45:42.102776  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:44.754852  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:45:44.754908  936523 api_server.go:103] status: https://192.168.39.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:45:44.754929  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:44.798753  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:45:44.798787  936523 api_server.go:103] status: https://192.168.39.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:45:45.103281  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:45.110461  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:45:45.110502  936523 api_server.go:103] status: https://192.168.39.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:45:45.603265  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:45.608429  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:45:45.608461  936523 api_server.go:103] status: https://192.168.39.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:45:46.103193  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:45:46.109152  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0127 02:45:46.116322  936523 api_server.go:141] control plane version: v1.24.4
	I0127 02:45:46.116356  936523 api_server.go:131] duration metric: took 25.514031752s to wait for apiserver health ...
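The healthz loop above simply polls https://192.168.39.65:8443/healthz until the apiserver answers 200; the intermediate 403s appear while anonymous health probes are still forbidden, and the 500s show the rbac/bootstrap-roles and priority-class post-start hooks not yet finished. A bare-bones polling sketch (the insecure TLS setting is an assumption made only to keep the example short; minikube itself uses the cluster CA and client certificates):

    // healthz_poll.go: poll the apiserver health endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.65:8443/healthz" // node IP and port from the log
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for this sketch only: skip server verification instead of
    		// wiring up the cluster CA and a client certificate.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				log.Println("apiserver is healthy")
    				return
    			}
    			log.Printf("healthz returned %d, retrying", resp.StatusCode)
    		} else {
    			log.Printf("healthz unreachable: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }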
	I0127 02:45:46.116366  936523 cni.go:84] Creating CNI manager for ""
	I0127 02:45:46.116373  936523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:45:46.118094  936523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 02:45:46.119346  936523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 02:45:46.130622  936523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 02:45:46.147966  936523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:45:46.148061  936523 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 02:45:46.148079  936523 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 02:45:46.169013  936523 system_pods.go:59] 7 kube-system pods found
	I0127 02:45:46.169052  936523 system_pods.go:61] "coredns-6d4b75cb6d-7whl5" [f3148a51-8c4d-4e24-9300-0fd5af64287e] Running
	I0127 02:45:46.169060  936523 system_pods.go:61] "etcd-test-preload-445920" [8e416c9c-9e27-45ce-bd58-7339a9128234] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:45:46.169064  936523 system_pods.go:61] "kube-apiserver-test-preload-445920" [bdb6e937-f6f9-4c50-aef5-89383b556be7] Running
	I0127 02:45:46.169075  936523 system_pods.go:61] "kube-controller-manager-test-preload-445920" [024e73d0-66c8-45cc-9459-82f8a740b3bb] Running
	I0127 02:45:46.169079  936523 system_pods.go:61] "kube-proxy-98kzr" [70f1e5ae-2c02-4a03-a74f-465d68e132bc] Running
	I0127 02:45:46.169082  936523 system_pods.go:61] "kube-scheduler-test-preload-445920" [f7fbf7be-65df-4b6b-abcb-5d485a77dc84] Running
	I0127 02:45:46.169086  936523 system_pods.go:61] "storage-provisioner" [683e7ae4-acb3-4781-967e-c6dbd794f159] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 02:45:46.169093  936523 system_pods.go:74] duration metric: took 21.103102ms to wait for pod list to return data ...
	I0127 02:45:46.169103  936523 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:45:46.177891  936523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:45:46.177928  936523 node_conditions.go:123] node cpu capacity is 2
	I0127 02:45:46.177941  936523 node_conditions.go:105] duration metric: took 8.833329ms to run NodePressure ...
	I0127 02:45:46.177966  936523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:45:46.410817  936523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 02:45:46.417362  936523 kubeadm.go:739] kubelet initialised
	I0127 02:45:46.417385  936523 kubeadm.go:740] duration metric: took 6.539693ms waiting for restarted kubelet to initialise ...
	I0127 02:45:46.417394  936523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:45:46.433338  936523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:46.440346  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.440379  936523 pod_ready.go:82] duration metric: took 7.013372ms for pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:46.440392  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.440402  936523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:46.445830  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "etcd-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.445852  936523 pod_ready.go:82] duration metric: took 5.441732ms for pod "etcd-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:46.445861  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "etcd-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.445867  936523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:46.452463  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "kube-apiserver-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.452487  936523 pod_ready.go:82] duration metric: took 6.611595ms for pod "kube-apiserver-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:46.452495  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "kube-apiserver-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.452501  936523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:46.554738  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.554774  936523 pod_ready.go:82] duration metric: took 102.26378ms for pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:46.554788  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.554797  936523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-98kzr" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:46.963017  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "kube-proxy-98kzr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.963051  936523 pod_ready.go:82] duration metric: took 408.238944ms for pod "kube-proxy-98kzr" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:46.963065  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "kube-proxy-98kzr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:46.963074  936523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:47.351617  936523 pod_ready.go:98] node "test-preload-445920" hosting pod "kube-scheduler-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:47.351652  936523 pod_ready.go:82] duration metric: took 388.569977ms for pod "kube-scheduler-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	E0127 02:45:47.351664  936523 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-445920" hosting pod "kube-scheduler-test-preload-445920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:47.351674  936523 pod_ready.go:39] duration metric: took 934.270326ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:45:47.351700  936523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 02:45:47.366809  936523 ops.go:34] apiserver oom_adj: -16
	I0127 02:45:47.366832  936523 kubeadm.go:597] duration metric: took 29.010864143s to restartPrimaryControlPlane
	I0127 02:45:47.366843  936523 kubeadm.go:394] duration metric: took 29.058164568s to StartCluster
	I0127 02:45:47.366866  936523 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:45:47.366940  936523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:45:47.367566  936523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:45:47.367770  936523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 02:45:47.367872  936523 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:45:47.367993  936523 addons.go:69] Setting storage-provisioner=true in profile "test-preload-445920"
	I0127 02:45:47.368037  936523 addons.go:238] Setting addon storage-provisioner=true in "test-preload-445920"
	W0127 02:45:47.368049  936523 addons.go:247] addon storage-provisioner should already be in state true
	I0127 02:45:47.368078  936523 config.go:182] Loaded profile config "test-preload-445920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 02:45:47.368088  936523 host.go:66] Checking if "test-preload-445920" exists ...
	I0127 02:45:47.368082  936523 addons.go:69] Setting default-storageclass=true in profile "test-preload-445920"
	I0127 02:45:47.368144  936523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-445920"
	I0127 02:45:47.368478  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:45:47.368516  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:45:47.368552  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:45:47.368604  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:45:47.369203  936523 out.go:177] * Verifying Kubernetes components...
	I0127 02:45:47.370445  936523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:45:47.384404  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35039
	I0127 02:45:47.384480  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0127 02:45:47.384895  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:45:47.385030  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:45:47.385492  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:45:47.385498  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:45:47.385516  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:45:47.385522  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:45:47.385886  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:45:47.385891  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:45:47.386072  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetState
	I0127 02:45:47.386541  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:45:47.386597  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:45:47.388547  936523 kapi.go:59] client config for test-preload-445920: &rest.Config{Host:"https://192.168.39.65:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/client.crt", KeyFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/test-preload-445920/client.key", CAFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 02:45:47.388834  936523 addons.go:238] Setting addon default-storageclass=true in "test-preload-445920"
	W0127 02:45:47.388853  936523 addons.go:247] addon default-storageclass should already be in state true
	I0127 02:45:47.388884  936523 host.go:66] Checking if "test-preload-445920" exists ...
	I0127 02:45:47.389256  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:45:47.389300  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:45:47.403065  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0127 02:45:47.403718  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:45:47.404345  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:45:47.404368  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:45:47.404393  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0127 02:45:47.404738  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:45:47.404799  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:45:47.404996  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetState
	I0127 02:45:47.405360  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:45:47.405389  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:45:47.405711  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:45:47.406510  936523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:45:47.406559  936523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:45:47.406754  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:47.408631  936523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:45:47.409907  936523 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:45:47.409921  936523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 02:45:47.409937  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:47.413135  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:47.413558  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:47.413605  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:47.413731  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:47.413903  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:47.414027  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:47.414202  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:47.422509  936523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0127 02:45:47.422954  936523 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:45:47.423387  936523 main.go:141] libmachine: Using API Version  1
	I0127 02:45:47.423404  936523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:45:47.423770  936523 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:45:47.423938  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetState
	I0127 02:45:47.425818  936523 main.go:141] libmachine: (test-preload-445920) Calling .DriverName
	I0127 02:45:47.426082  936523 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 02:45:47.426098  936523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 02:45:47.426116  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHHostname
	I0127 02:45:47.429231  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:47.429679  936523 main.go:141] libmachine: (test-preload-445920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:ff", ip: ""} in network mk-test-preload-445920: {Iface:virbr1 ExpiryTime:2025-01-27 03:44:55 +0000 UTC Type:0 Mac:52:54:00:36:d0:ff Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:test-preload-445920 Clientid:01:52:54:00:36:d0:ff}
	I0127 02:45:47.429698  936523 main.go:141] libmachine: (test-preload-445920) DBG | domain test-preload-445920 has defined IP address 192.168.39.65 and MAC address 52:54:00:36:d0:ff in network mk-test-preload-445920
	I0127 02:45:47.429832  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHPort
	I0127 02:45:47.430055  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHKeyPath
	I0127 02:45:47.430222  936523 main.go:141] libmachine: (test-preload-445920) Calling .GetSSHUsername
	I0127 02:45:47.430359  936523 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/test-preload-445920/id_rsa Username:docker}
	I0127 02:45:47.559871  936523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:45:47.576159  936523 node_ready.go:35] waiting up to 6m0s for node "test-preload-445920" to be "Ready" ...
	I0127 02:45:47.646761  936523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:45:47.658586  936523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:45:48.679977  936523 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021353992s)
	I0127 02:45:48.680038  936523 main.go:141] libmachine: Making call to close driver server
	I0127 02:45:48.680050  936523 main.go:141] libmachine: (test-preload-445920) Calling .Close
	I0127 02:45:48.680115  936523 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03330793s)
	I0127 02:45:48.680167  936523 main.go:141] libmachine: Making call to close driver server
	I0127 02:45:48.680185  936523 main.go:141] libmachine: (test-preload-445920) Calling .Close
	I0127 02:45:48.680368  936523 main.go:141] libmachine: Successfully made call to close driver server
	I0127 02:45:48.680385  936523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 02:45:48.680394  936523 main.go:141] libmachine: Making call to close driver server
	I0127 02:45:48.680401  936523 main.go:141] libmachine: (test-preload-445920) Calling .Close
	I0127 02:45:48.680537  936523 main.go:141] libmachine: (test-preload-445920) DBG | Closing plugin on server side
	I0127 02:45:48.680540  936523 main.go:141] libmachine: Successfully made call to close driver server
	I0127 02:45:48.680566  936523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 02:45:48.680576  936523 main.go:141] libmachine: Making call to close driver server
	I0127 02:45:48.680587  936523 main.go:141] libmachine: (test-preload-445920) Calling .Close
	I0127 02:45:48.680696  936523 main.go:141] libmachine: Successfully made call to close driver server
	I0127 02:45:48.680715  936523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 02:45:48.680730  936523 main.go:141] libmachine: (test-preload-445920) DBG | Closing plugin on server side
	I0127 02:45:48.680803  936523 main.go:141] libmachine: Successfully made call to close driver server
	I0127 02:45:48.680817  936523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 02:45:48.680841  936523 main.go:141] libmachine: (test-preload-445920) DBG | Closing plugin on server side
	I0127 02:45:48.689444  936523 main.go:141] libmachine: Making call to close driver server
	I0127 02:45:48.689466  936523 main.go:141] libmachine: (test-preload-445920) Calling .Close
	I0127 02:45:48.689733  936523 main.go:141] libmachine: Successfully made call to close driver server
	I0127 02:45:48.689754  936523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 02:45:48.689738  936523 main.go:141] libmachine: (test-preload-445920) DBG | Closing plugin on server side
	I0127 02:45:48.691684  936523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 02:45:48.692891  936523 addons.go:514] duration metric: took 1.325046762s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 02:45:49.580277  936523 node_ready.go:53] node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:52.079649  936523 node_ready.go:53] node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:54.079960  936523 node_ready.go:53] node "test-preload-445920" has status "Ready":"False"
	I0127 02:45:55.080023  936523 node_ready.go:49] node "test-preload-445920" has status "Ready":"True"
	I0127 02:45:55.080048  936523 node_ready.go:38] duration metric: took 7.50384976s for node "test-preload-445920" to be "Ready" ...
	I0127 02:45:55.080059  936523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:45:55.085163  936523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:55.089686  936523 pod_ready.go:93] pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace has status "Ready":"True"
	I0127 02:45:55.089710  936523 pod_ready.go:82] duration metric: took 4.515605ms for pod "coredns-6d4b75cb6d-7whl5" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:55.089719  936523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:45:57.096867  936523 pod_ready.go:103] pod "etcd-test-preload-445920" in "kube-system" namespace has status "Ready":"False"
	I0127 02:45:59.098988  936523 pod_ready.go:103] pod "etcd-test-preload-445920" in "kube-system" namespace has status "Ready":"False"
	I0127 02:46:00.097216  936523 pod_ready.go:93] pod "etcd-test-preload-445920" in "kube-system" namespace has status "Ready":"True"
	I0127 02:46:00.097242  936523 pod_ready.go:82] duration metric: took 5.007517055s for pod "etcd-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.097252  936523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.105823  936523 pod_ready.go:93] pod "kube-apiserver-test-preload-445920" in "kube-system" namespace has status "Ready":"True"
	I0127 02:46:00.105851  936523 pod_ready.go:82] duration metric: took 8.59291ms for pod "kube-apiserver-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.105862  936523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.110197  936523 pod_ready.go:93] pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace has status "Ready":"True"
	I0127 02:46:00.110222  936523 pod_ready.go:82] duration metric: took 4.353852ms for pod "kube-controller-manager-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.110231  936523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98kzr" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.114513  936523 pod_ready.go:93] pod "kube-proxy-98kzr" in "kube-system" namespace has status "Ready":"True"
	I0127 02:46:00.114538  936523 pod_ready.go:82] duration metric: took 4.300183ms for pod "kube-proxy-98kzr" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.114546  936523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.118932  936523 pod_ready.go:93] pod "kube-scheduler-test-preload-445920" in "kube-system" namespace has status "Ready":"True"
	I0127 02:46:00.118955  936523 pod_ready.go:82] duration metric: took 4.402318ms for pod "kube-scheduler-test-preload-445920" in "kube-system" namespace to be "Ready" ...
	I0127 02:46:00.118964  936523 pod_ready.go:39] duration metric: took 5.038892604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:46:00.118981  936523 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:46:00.119031  936523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:46:00.134035  936523 api_server.go:72] duration metric: took 12.766236551s to wait for apiserver process to appear ...
	I0127 02:46:00.134078  936523 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:46:00.134102  936523 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0127 02:46:00.139253  936523 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0127 02:46:00.140096  936523 api_server.go:141] control plane version: v1.24.4
	I0127 02:46:00.140123  936523 api_server.go:131] duration metric: took 6.036777ms to wait for apiserver health ...
	I0127 02:46:00.140132  936523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:46:00.296157  936523 system_pods.go:59] 7 kube-system pods found
	I0127 02:46:00.296190  936523 system_pods.go:61] "coredns-6d4b75cb6d-7whl5" [f3148a51-8c4d-4e24-9300-0fd5af64287e] Running
	I0127 02:46:00.296195  936523 system_pods.go:61] "etcd-test-preload-445920" [8e416c9c-9e27-45ce-bd58-7339a9128234] Running
	I0127 02:46:00.296199  936523 system_pods.go:61] "kube-apiserver-test-preload-445920" [bdb6e937-f6f9-4c50-aef5-89383b556be7] Running
	I0127 02:46:00.296203  936523 system_pods.go:61] "kube-controller-manager-test-preload-445920" [024e73d0-66c8-45cc-9459-82f8a740b3bb] Running
	I0127 02:46:00.296206  936523 system_pods.go:61] "kube-proxy-98kzr" [70f1e5ae-2c02-4a03-a74f-465d68e132bc] Running
	I0127 02:46:00.296210  936523 system_pods.go:61] "kube-scheduler-test-preload-445920" [f7fbf7be-65df-4b6b-abcb-5d485a77dc84] Running
	I0127 02:46:00.296214  936523 system_pods.go:61] "storage-provisioner" [683e7ae4-acb3-4781-967e-c6dbd794f159] Running
	I0127 02:46:00.296220  936523 system_pods.go:74] duration metric: took 156.077417ms to wait for pod list to return data ...
	I0127 02:46:00.296227  936523 default_sa.go:34] waiting for default service account to be created ...
	I0127 02:46:00.493751  936523 default_sa.go:45] found service account: "default"
	I0127 02:46:00.493786  936523 default_sa.go:55] duration metric: took 197.55081ms for default service account to be created ...
	I0127 02:46:00.493798  936523 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 02:46:00.697658  936523 system_pods.go:87] 7 kube-system pods found
	I0127 02:46:00.895128  936523 system_pods.go:105] "coredns-6d4b75cb6d-7whl5" [f3148a51-8c4d-4e24-9300-0fd5af64287e] Running
	I0127 02:46:00.895157  936523 system_pods.go:105] "etcd-test-preload-445920" [8e416c9c-9e27-45ce-bd58-7339a9128234] Running
	I0127 02:46:00.895175  936523 system_pods.go:105] "kube-apiserver-test-preload-445920" [bdb6e937-f6f9-4c50-aef5-89383b556be7] Running
	I0127 02:46:00.895183  936523 system_pods.go:105] "kube-controller-manager-test-preload-445920" [024e73d0-66c8-45cc-9459-82f8a740b3bb] Running
	I0127 02:46:00.895197  936523 system_pods.go:105] "kube-proxy-98kzr" [70f1e5ae-2c02-4a03-a74f-465d68e132bc] Running
	I0127 02:46:00.895203  936523 system_pods.go:105] "kube-scheduler-test-preload-445920" [f7fbf7be-65df-4b6b-abcb-5d485a77dc84] Running
	I0127 02:46:00.895209  936523 system_pods.go:105] "storage-provisioner" [683e7ae4-acb3-4781-967e-c6dbd794f159] Running
	I0127 02:46:00.895220  936523 system_pods.go:147] duration metric: took 401.414427ms to wait for k8s-apps to be running ...
	I0127 02:46:00.895233  936523 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 02:46:00.895291  936523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:46:00.910446  936523 system_svc.go:56] duration metric: took 15.191317ms WaitForService to wait for kubelet
	I0127 02:46:00.910488  936523 kubeadm.go:582] duration metric: took 13.542686307s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:46:00.910517  936523 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:46:01.094176  936523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:46:01.094217  936523 node_conditions.go:123] node cpu capacity is 2
	I0127 02:46:01.094234  936523 node_conditions.go:105] duration metric: took 183.709059ms to run NodePressure ...
	I0127 02:46:01.094250  936523 start.go:241] waiting for startup goroutines ...
	I0127 02:46:01.094261  936523 start.go:246] waiting for cluster config update ...
	I0127 02:46:01.094275  936523 start.go:255] writing updated cluster config ...
	I0127 02:46:01.094613  936523 ssh_runner.go:195] Run: rm -f paused
	I0127 02:46:01.145705  936523 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 02:46:01.147167  936523 out.go:201] 
	W0127 02:46:01.148367  936523 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 02:46:01.149479  936523 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 02:46:01.150654  936523 out.go:177] * Done! kubectl is now configured to use "test-preload-445920" cluster and "default" namespace by default
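The pod_ready.go entries above trace a simple pattern: for each system-critical pod, poll the API server until the pod's Ready condition is True or the per-pod timeout (4m0s before the node is Ready, 6m0s after) elapses. The following is a minimal, illustrative client-go sketch of that polling pattern, not minikube's actual implementation; the kubeconfig path, namespace, and pod name are taken from this run's log for illustration, and the 2-second poll interval is an assumption.

// Sketch only: poll a pod until its PodReady condition is True or a timeout elapses,
// mirroring the "waiting up to 6m0s for pod ... to be \"Ready\"" loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady re-fetches the pod on a fixed interval and returns nil once the
// PodReady condition is True; transient Get errors are simply retried until the deadline.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
		}
		time.Sleep(2 * time.Second) // assumed poll interval for this sketch
	}
}

func main() {
	// Kubeconfig path as written by this run (see the settings.go line above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-897624/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-test-preload-445920", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}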
	
	
	==> CRI-O <==
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.045100448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737945962045081075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36c5dd9b-7fee-4a24-9505-03b78e7eb1fb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.045692854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54af8f21-6564-4101-8e73-9d99a64b9301 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.045771741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54af8f21-6564-4101-8e73-9d99a64b9301 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.046315745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c27fa02962236be4532036a2c20f9af82792f6c539dc7ad8672a964c5b2f069,PodSandboxId:effa27f35c076f20c5e662aee6b0cb06df79ac66927ee7694d4ae8ad845609a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737945953932811255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7whl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3148a51-8c4d-4e24-9300-0fd5af64287e,},Annotations:map[string]string{io.kubernetes.container.hash: 9209aa0a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0edabb7aff5ef891d199d91f4180f641da14a9ed5778ffc7c9b0f3c0619da0d,PodSandboxId:099c27e1794bcff341bf201edb836f439e18d5901e6acf8a6da7141503cef4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737945946922910309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 683e7ae4-acb3-4781-967e-c6dbd794f159,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad5db7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0495d116684b99827a374ad912098d40b3c03476348db4ce4ae92e49a1b4a903,PodSandboxId:2cfb68316585c8e8de99a7b2df6322594062c6bf053de48adebb3908ac9949b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737945946616651071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98kzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f
1e5ae-2c02-4a03-a74f-465d68e132bc,},Annotations:map[string]string{io.kubernetes.container.hash: d067b3a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8197b1d2061b648f345739b33252b4086d8cf4660e6386b837fbcba417e6c1f7,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737945945793564095,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178fb8f25609e39b6a32b02382f6424b1ee53606f8673be179ae5573f99c9683,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737945941768572948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotations:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6867d05e797ea82f475c4ed3bf2694d145ef9615136582c7def4307d824570,PodSandboxId:6ee8a46f1ab8915ead8777705e7f6f734fa229d052ec09a1cd7c31d9e5d518aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737945939949346394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf1ed078571cc1e998d0b5a0e9d0234,},
Annotations:map[string]string{io.kubernetes.container.hash: 8b9907ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952c88e9c714217708d69d14f1ac2800b8e3dd0f90bd57761324d1a8f607c0d7,PodSandboxId:26198946afd17b241d4d22b2aabb7d8dae4edcd770efab8c2135cf8663bacf2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737945920276853583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af43d561d37ec9e57dbd371a2da2afc8,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1737945920237216741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed
6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585b4575765ad3cb800bfdf522749bdb0b5765ee4254e820d12b652b10c2a70d,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1737945920227307264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54af8f21-6564-4101-8e73-9d99a64b9301 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.090824608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6aaca59-69fd-4692-bc32-e6a7ad02dd25 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.090910065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6aaca59-69fd-4692-bc32-e6a7ad02dd25 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.092158683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62850615-2b4b-4c46-bee3-0cbf30a381c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.092686336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737945962092663700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62850615-2b4b-4c46-bee3-0cbf30a381c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.093267698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60c7447b-ad55-4ffb-a771-75ca91906923 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.093330985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60c7447b-ad55-4ffb-a771-75ca91906923 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.093648964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c27fa02962236be4532036a2c20f9af82792f6c539dc7ad8672a964c5b2f069,PodSandboxId:effa27f35c076f20c5e662aee6b0cb06df79ac66927ee7694d4ae8ad845609a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737945953932811255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7whl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3148a51-8c4d-4e24-9300-0fd5af64287e,},Annotations:map[string]string{io.kubernetes.container.hash: 9209aa0a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0edabb7aff5ef891d199d91f4180f641da14a9ed5778ffc7c9b0f3c0619da0d,PodSandboxId:099c27e1794bcff341bf201edb836f439e18d5901e6acf8a6da7141503cef4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737945946922910309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 683e7ae4-acb3-4781-967e-c6dbd794f159,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad5db7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0495d116684b99827a374ad912098d40b3c03476348db4ce4ae92e49a1b4a903,PodSandboxId:2cfb68316585c8e8de99a7b2df6322594062c6bf053de48adebb3908ac9949b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737945946616651071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98kzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f
1e5ae-2c02-4a03-a74f-465d68e132bc,},Annotations:map[string]string{io.kubernetes.container.hash: d067b3a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8197b1d2061b648f345739b33252b4086d8cf4660e6386b837fbcba417e6c1f7,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737945945793564095,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178fb8f25609e39b6a32b02382f6424b1ee53606f8673be179ae5573f99c9683,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737945941768572948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotations:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6867d05e797ea82f475c4ed3bf2694d145ef9615136582c7def4307d824570,PodSandboxId:6ee8a46f1ab8915ead8777705e7f6f734fa229d052ec09a1cd7c31d9e5d518aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737945939949346394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf1ed078571cc1e998d0b5a0e9d0234,},
Annotations:map[string]string{io.kubernetes.container.hash: 8b9907ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952c88e9c714217708d69d14f1ac2800b8e3dd0f90bd57761324d1a8f607c0d7,PodSandboxId:26198946afd17b241d4d22b2aabb7d8dae4edcd770efab8c2135cf8663bacf2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737945920276853583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af43d561d37ec9e57dbd371a2da2afc8,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1737945920237216741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed
6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585b4575765ad3cb800bfdf522749bdb0b5765ee4254e820d12b652b10c2a70d,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1737945920227307264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60c7447b-ad55-4ffb-a771-75ca91906923 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.134931015Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa48f609-407b-407b-b861-c949cbe85070 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.135007051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa48f609-407b-407b-b861-c949cbe85070 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.136388825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8999ee9-6524-4af2-b1be-6a93915de22c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.136902163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737945962136878605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8999ee9-6524-4af2-b1be-6a93915de22c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.137626678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=812de807-91e5-4676-9c2e-c330a49f0463 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.137693788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=812de807-91e5-4676-9c2e-c330a49f0463 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.137880430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c27fa02962236be4532036a2c20f9af82792f6c539dc7ad8672a964c5b2f069,PodSandboxId:effa27f35c076f20c5e662aee6b0cb06df79ac66927ee7694d4ae8ad845609a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737945953932811255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7whl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3148a51-8c4d-4e24-9300-0fd5af64287e,},Annotations:map[string]string{io.kubernetes.container.hash: 9209aa0a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0edabb7aff5ef891d199d91f4180f641da14a9ed5778ffc7c9b0f3c0619da0d,PodSandboxId:099c27e1794bcff341bf201edb836f439e18d5901e6acf8a6da7141503cef4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737945946922910309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 683e7ae4-acb3-4781-967e-c6dbd794f159,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad5db7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0495d116684b99827a374ad912098d40b3c03476348db4ce4ae92e49a1b4a903,PodSandboxId:2cfb68316585c8e8de99a7b2df6322594062c6bf053de48adebb3908ac9949b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737945946616651071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98kzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f
1e5ae-2c02-4a03-a74f-465d68e132bc,},Annotations:map[string]string{io.kubernetes.container.hash: d067b3a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8197b1d2061b648f345739b33252b4086d8cf4660e6386b837fbcba417e6c1f7,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737945945793564095,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178fb8f25609e39b6a32b02382f6424b1ee53606f8673be179ae5573f99c9683,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737945941768572948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotations:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6867d05e797ea82f475c4ed3bf2694d145ef9615136582c7def4307d824570,PodSandboxId:6ee8a46f1ab8915ead8777705e7f6f734fa229d052ec09a1cd7c31d9e5d518aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737945939949346394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf1ed078571cc1e998d0b5a0e9d0234,},
Annotations:map[string]string{io.kubernetes.container.hash: 8b9907ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952c88e9c714217708d69d14f1ac2800b8e3dd0f90bd57761324d1a8f607c0d7,PodSandboxId:26198946afd17b241d4d22b2aabb7d8dae4edcd770efab8c2135cf8663bacf2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737945920276853583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af43d561d37ec9e57dbd371a2da2afc8,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1737945920237216741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed
6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585b4575765ad3cb800bfdf522749bdb0b5765ee4254e820d12b652b10c2a70d,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1737945920227307264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=812de807-91e5-4676-9c2e-c330a49f0463 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.168967709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4bce52cb-07a8-4033-a273-2e4d3c2aa857 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.169054313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4bce52cb-07a8-4033-a273-2e4d3c2aa857 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.170304462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4422da25-7bfd-489e-ad70-990b8f267544 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.170925145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737945962170901064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4422da25-7bfd-489e-ad70-990b8f267544 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.171449958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b4f523a-539a-4595-af46-ab1fbe337870 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.171503964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b4f523a-539a-4595-af46-ab1fbe337870 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:46:02 test-preload-445920 crio[670]: time="2025-01-27 02:46:02.171767470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c27fa02962236be4532036a2c20f9af82792f6c539dc7ad8672a964c5b2f069,PodSandboxId:effa27f35c076f20c5e662aee6b0cb06df79ac66927ee7694d4ae8ad845609a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737945953932811255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7whl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3148a51-8c4d-4e24-9300-0fd5af64287e,},Annotations:map[string]string{io.kubernetes.container.hash: 9209aa0a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0edabb7aff5ef891d199d91f4180f641da14a9ed5778ffc7c9b0f3c0619da0d,PodSandboxId:099c27e1794bcff341bf201edb836f439e18d5901e6acf8a6da7141503cef4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737945946922910309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 683e7ae4-acb3-4781-967e-c6dbd794f159,},Annotations:map[string]string{io.kubernetes.container.hash: 3ad5db7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0495d116684b99827a374ad912098d40b3c03476348db4ce4ae92e49a1b4a903,PodSandboxId:2cfb68316585c8e8de99a7b2df6322594062c6bf053de48adebb3908ac9949b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737945946616651071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98kzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f
1e5ae-2c02-4a03-a74f-465d68e132bc,},Annotations:map[string]string{io.kubernetes.container.hash: d067b3a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8197b1d2061b648f345739b33252b4086d8cf4660e6386b837fbcba417e6c1f7,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737945945793564095,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178fb8f25609e39b6a32b02382f6424b1ee53606f8673be179ae5573f99c9683,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737945941768572948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotations:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6867d05e797ea82f475c4ed3bf2694d145ef9615136582c7def4307d824570,PodSandboxId:6ee8a46f1ab8915ead8777705e7f6f734fa229d052ec09a1cd7c31d9e5d518aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737945939949346394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf1ed078571cc1e998d0b5a0e9d0234,},
Annotations:map[string]string{io.kubernetes.container.hash: 8b9907ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952c88e9c714217708d69d14f1ac2800b8e3dd0f90bd57761324d1a8f607c0d7,PodSandboxId:26198946afd17b241d4d22b2aabb7d8dae4edcd770efab8c2135cf8663bacf2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737945920276853583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af43d561d37ec9e57dbd371a2da2afc8,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537,PodSandboxId:3f609fceb508ec46ab7bf5fa7e90b39c0608886ff6ae554cc1d5b01e505ca779,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1737945920237216741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ce1b5c5eec61b6ce2bfd7652832ed
6,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585b4575765ad3cb800bfdf522749bdb0b5765ee4254e820d12b652b10c2a70d,PodSandboxId:da97efec37deed104538d036ac15a8a752fde39b8bb29c2820f78f1dc0e86044,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1737945920227307264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-445920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979dc8368fc4f8eac6257bdec2a42671,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 714aeb27,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b4f523a-539a-4595-af46-ab1fbe337870 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9c27fa0296223       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   effa27f35c076       coredns-6d4b75cb6d-7whl5
	e0edabb7aff5e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   099c27e1794bc       storage-provisioner
	0495d116684b9       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   2cfb68316585c       kube-proxy-98kzr
	8197b1d2061b6       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   16 seconds ago      Running             kube-controller-manager   2                   3f609fceb508e       kube-controller-manager-test-preload-445920
	178fb8f25609e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            2                   da97efec37dee       kube-apiserver-test-preload-445920
	0d6867d05e797       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   6ee8a46f1ab89       etcd-test-preload-445920
	952c88e9c7142       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   41 seconds ago      Running             kube-scheduler            1                   26198946afd17       kube-scheduler-test-preload-445920
	32a7c9c3f6568       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   42 seconds ago      Exited              kube-controller-manager   1                   3f609fceb508e       kube-controller-manager-test-preload-445920
	585b4575765ad       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   42 seconds ago      Exited              kube-apiserver            1                   da97efec37dee       kube-apiserver-test-preload-445920
	
	
	==> coredns [9c27fa02962236be4532036a2c20f9af82792f6c539dc7ad8672a964c5b2f069] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44957 - 39517 "HINFO IN 2886408219301634657.2208090702114797993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011084116s
	
	
	==> describe nodes <==
	Name:               test-preload-445920
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-445920
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=test-preload-445920
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T02_43_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 02:43:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-445920
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 02:45:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 02:45:55 +0000   Mon, 27 Jan 2025 02:43:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 02:45:55 +0000   Mon, 27 Jan 2025 02:43:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 02:45:55 +0000   Mon, 27 Jan 2025 02:43:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 02:45:55 +0000   Mon, 27 Jan 2025 02:45:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    test-preload-445920
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 69bb3e2e1017464f97471a23fc7767a2
	  System UUID:                69bb3e2e-1017-464f-9747-1a23fc7767a2
	  Boot ID:                    a0134416-cf48-45f7-9924-489a6da2d306
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7whl5                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m24s
	  kube-system                 etcd-test-preload-445920                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m38s
	  kube-system                 kube-apiserver-test-preload-445920             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-test-preload-445920    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-98kzr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-test-preload-445920             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  Starting                 2m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m45s (x5 over 2m45s)  kubelet          Node test-preload-445920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s (x4 over 2m45s)  kubelet          Node test-preload-445920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s (x4 over 2m45s)  kubelet          Node test-preload-445920 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m37s                  kubelet          Node test-preload-445920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s                  kubelet          Node test-preload-445920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s                  kubelet          Node test-preload-445920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m27s                  kubelet          Node test-preload-445920 status is now: NodeReady
	  Normal  RegisteredNode           2m25s                  node-controller  Node test-preload-445920 event: Registered Node test-preload-445920 in Controller
	  Normal  Starting                 43s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)      kubelet          Node test-preload-445920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)      kubelet          Node test-preload-445920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)      kubelet          Node test-preload-445920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                     node-controller  Node test-preload-445920 event: Registered Node test-preload-445920 in Controller
	
	
	==> dmesg <==
	[Jan27 02:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053070] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037500] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.887368] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.036367] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.515493] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 02:45] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.059587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062096] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.187193] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.118418] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.264547] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[ +12.588264] systemd-fstab-generator[988]: Ignoring "noauto" option for root device
	[  +0.056855] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.564618] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +6.103010] kauditd_printk_skb: 95 callbacks suppressed
	[ +20.373240] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.603564] systemd-fstab-generator[1864]: Ignoring "noauto" option for root device
	[  +6.257788] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [0d6867d05e797ea82f475c4ed3bf2694d145ef9615136582c7def4307d824570] <==
	{"level":"info","ts":"2025-01-27T02:45:40.083Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c17fb7325889e027","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T02:45:40.084Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T02:45:40.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 switched to configuration voters=(13943064398224023591)"}
	{"level":"info","ts":"2025-01-27T02:45:40.084Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","added-peer-id":"c17fb7325889e027","added-peer-peer-urls":["https://192.168.39.65:2380"]}
	{"level":"info","ts":"2025-01-27T02:45:40.084Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:45:40.084Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:45:40.086Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T02:45:40.086Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c17fb7325889e027","initial-advertise-peer-urls":["https://192.168.39.65:2380"],"listen-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.65:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T02:45:40.086Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T02:45:40.087Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2025-01-27T02:45:40.087Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgPreVoteResp from c17fb7325889e027 at term 2"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgVoteResp from c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c17fb7325889e027 elected leader c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c17fb7325889e027","local-member-attributes":"{Name:test-preload-445920 ClientURLs:[https://192.168.39.65:2379]}","request-path":"/0/members/c17fb7325889e027/attributes","cluster-id":"f0d16ed1ce05ac0e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T02:45:41.871Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T02:45:41.872Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T02:45:41.873Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.65:2379"}
	{"level":"info","ts":"2025-01-27T02:45:41.873Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T02:45:41.873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T02:45:41.873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:46:02 up 1 min,  0 users,  load average: 0.77, 0.26, 0.09
	Linux test-preload-445920 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [178fb8f25609e39b6a32b02382f6424b1ee53606f8673be179ae5573f99c9683] <==
	I0127 02:45:44.677889       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0127 02:45:44.678484       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0127 02:45:44.678565       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 02:45:44.681856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 02:45:44.737134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 02:45:44.753854       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0127 02:45:44.814936       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0127 02:45:44.852104       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 02:45:44.855740       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 02:45:44.862230       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 02:45:44.862616       1 cache.go:39] Caches are synced for autoregister controller
	I0127 02:45:44.868198       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 02:45:44.877952       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 02:45:44.878868       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 02:45:45.338712       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 02:45:45.659659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 02:45:45.813015       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 02:45:46.309509       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 02:45:46.317650       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 02:45:46.355461       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 02:45:46.379086       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 02:45:46.389283       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 02:45:47.111832       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 02:45:58.425086       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 02:45:58.476867       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [585b4575765ad3cb800bfdf522749bdb0b5765ee4254e820d12b652b10c2a70d] <==
	I0127 02:45:20.879568       1 server.go:558] external host was not specified, using 192.168.39.65
	I0127 02:45:20.884589       1 server.go:158] Version: v1.24.4
	I0127 02:45:20.884648       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:45:21.347729       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0127 02:45:21.348607       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0127 02:45:21.348717       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 02:45:21.350798       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0127 02:45:21.350872       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0127 02:45:21.365720       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:22.328020       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:22.366413       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:23.329290       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:23.746290       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:25.097501       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:26.144201       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:27.662862       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:30.083741       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:31.230275       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:36.492305       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0127 02:45:38.003961       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0127 02:45:41.364945       1 run.go:74] "command failed" err="context deadline exceeded"
	
	
	==> kube-controller-manager [32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537] <==
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x68e500?, {0x4d010e0, 0xc000ca8690}, 0x1, 0xc0000bca20)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000130008?, 0xdf8475800, 0x0, 0x40?, 0xc000495560?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0xc0001014a0?, 0x0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
	
	goroutine 148 [syscall]:
	syscall.Syscall6(0xe8, 0xe, 0xc000da9c14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
		/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
	k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xabf9ff757940f993?, {0xc000da9c14?, 0xf2937bf8e793cc80?, 0xce75166cc07a21dd?}, 0x8441ca0febdd4cb0?)
		vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0003458e0)
		vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000356b90)
		vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
	created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
		vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
	
	
	==> kube-controller-manager [8197b1d2061b648f345739b33252b4086d8cf4660e6386b837fbcba417e6c1f7] <==
	I0127 02:45:58.252270       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 02:45:58.253620       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0127 02:45:58.257074       1 shared_informer.go:262] Caches are synced for endpoint
	I0127 02:45:58.258229       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 02:45:58.258264       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 02:45:58.259368       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 02:45:58.259402       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0127 02:45:58.260634       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 02:45:58.272631       1 shared_informer.go:262] Caches are synced for PVC protection
	I0127 02:45:58.274923       1 shared_informer.go:262] Caches are synced for ephemeral
	I0127 02:45:58.283630       1 shared_informer.go:262] Caches are synced for service account
	I0127 02:45:58.290873       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0127 02:45:58.297617       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0127 02:45:58.334297       1 shared_informer.go:262] Caches are synced for daemon sets
	I0127 02:45:58.394432       1 shared_informer.go:262] Caches are synced for taint
	I0127 02:45:58.394781       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0127 02:45:58.395260       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0127 02:45:58.395412       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-445920. Assuming now as a timestamp.
	I0127 02:45:58.395476       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0127 02:45:58.395888       1 event.go:294] "Event occurred" object="test-preload-445920" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-445920 event: Registered Node test-preload-445920 in Controller"
	I0127 02:45:58.480484       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 02:45:58.485589       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 02:45:58.917871       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 02:45:58.929170       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 02:45:58.929304       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [0495d116684b99827a374ad912098d40b3c03476348db4ce4ae92e49a1b4a903] <==
	I0127 02:45:47.009439       1 node.go:163] Successfully retrieved node IP: 192.168.39.65
	I0127 02:45:47.009803       1 server_others.go:138] "Detected node IP" address="192.168.39.65"
	I0127 02:45:47.009917       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 02:45:47.102245       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 02:45:47.102317       1 server_others.go:206] "Using iptables Proxier"
	I0127 02:45:47.102408       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 02:45:47.103249       1 server.go:661] "Version info" version="v1.24.4"
	I0127 02:45:47.103296       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:45:47.104857       1 config.go:317] "Starting service config controller"
	I0127 02:45:47.104933       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 02:45:47.104971       1 config.go:226] "Starting endpoint slice config controller"
	I0127 02:45:47.104988       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 02:45:47.106296       1 config.go:444] "Starting node config controller"
	I0127 02:45:47.107882       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 02:45:47.206965       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0127 02:45:47.207077       1 shared_informer.go:262] Caches are synced for service config
	I0127 02:45:47.208172       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [952c88e9c714217708d69d14f1ac2800b8e3dd0f90bd57761324d1a8f607c0d7] <==
	I0127 02:45:20.989894       1 serving.go:348] Generated self-signed cert in-memory
	W0127 02:45:31.499398       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.39.65:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0127 02:45:31.499480       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 02:45:31.499491       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 02:45:44.798786       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 02:45:44.798826       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:45:44.812240       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 02:45:44.812299       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:45:44.813584       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 02:45:44.815884       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 02:45:44.912571       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.597139    1122 apiserver.go:52] "Watching apiserver"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.603465    1122 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.603612    1122 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.603660    1122 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: E0127 02:45:45.606181    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7whl5" podUID=f3148a51-8c4d-4e24-9300-0fd5af64287e
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.779974    1122 scope.go:110] "RemoveContainer" containerID="32a7c9c3f6568fa1921a84e1bd0095811e714988bd79311b831a9db66ebd3537"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783032    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnw9\" (UniqueName: \"kubernetes.io/projected/f3148a51-8c4d-4e24-9300-0fd5af64287e-kube-api-access-rrnw9\") pod \"coredns-6d4b75cb6d-7whl5\" (UID: \"f3148a51-8c4d-4e24-9300-0fd5af64287e\") " pod="kube-system/coredns-6d4b75cb6d-7whl5"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783116    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/683e7ae4-acb3-4781-967e-c6dbd794f159-tmp\") pod \"storage-provisioner\" (UID: \"683e7ae4-acb3-4781-967e-c6dbd794f159\") " pod="kube-system/storage-provisioner"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783156    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume\") pod \"coredns-6d4b75cb6d-7whl5\" (UID: \"f3148a51-8c4d-4e24-9300-0fd5af64287e\") " pod="kube-system/coredns-6d4b75cb6d-7whl5"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783222    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70f1e5ae-2c02-4a03-a74f-465d68e132bc-kube-proxy\") pod \"kube-proxy-98kzr\" (UID: \"70f1e5ae-2c02-4a03-a74f-465d68e132bc\") " pod="kube-system/kube-proxy-98kzr"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783241    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70f1e5ae-2c02-4a03-a74f-465d68e132bc-lib-modules\") pod \"kube-proxy-98kzr\" (UID: \"70f1e5ae-2c02-4a03-a74f-465d68e132bc\") " pod="kube-system/kube-proxy-98kzr"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783301    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjx6z\" (UniqueName: \"kubernetes.io/projected/70f1e5ae-2c02-4a03-a74f-465d68e132bc-kube-api-access-qjx6z\") pod \"kube-proxy-98kzr\" (UID: \"70f1e5ae-2c02-4a03-a74f-465d68e132bc\") " pod="kube-system/kube-proxy-98kzr"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783335    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2xr\" (UniqueName: \"kubernetes.io/projected/683e7ae4-acb3-4781-967e-c6dbd794f159-kube-api-access-jg2xr\") pod \"storage-provisioner\" (UID: \"683e7ae4-acb3-4781-967e-c6dbd794f159\") " pod="kube-system/storage-provisioner"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783399    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f1e5ae-2c02-4a03-a74f-465d68e132bc-xtables-lock\") pod \"kube-proxy-98kzr\" (UID: \"70f1e5ae-2c02-4a03-a74f-465d68e132bc\") " pod="kube-system/kube-proxy-98kzr"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: I0127 02:45:45.783435    1122 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: E0127 02:45:45.892109    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 02:45:45 test-preload-445920 kubelet[1122]: E0127 02:45:45.892213    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume podName:f3148a51-8c4d-4e24-9300-0fd5af64287e nodeName:}" failed. No retries permitted until 2025-01-27 02:45:46.392183813 +0000 UTC m=+26.963707692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume") pod "coredns-6d4b75cb6d-7whl5" (UID: "f3148a51-8c4d-4e24-9300-0fd5af64287e") : object "kube-system"/"coredns" not registered
	Jan 27 02:45:46 test-preload-445920 kubelet[1122]: E0127 02:45:46.396277    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 02:45:46 test-preload-445920 kubelet[1122]: E0127 02:45:46.396358    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume podName:f3148a51-8c4d-4e24-9300-0fd5af64287e nodeName:}" failed. No retries permitted until 2025-01-27 02:45:47.396342153 +0000 UTC m=+27.967866042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume") pod "coredns-6d4b75cb6d-7whl5" (UID: "f3148a51-8c4d-4e24-9300-0fd5af64287e") : object "kube-system"/"coredns" not registered
	Jan 27 02:45:46 test-preload-445920 kubelet[1122]: E0127 02:45:46.674426    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7whl5" podUID=f3148a51-8c4d-4e24-9300-0fd5af64287e
	Jan 27 02:45:47 test-preload-445920 kubelet[1122]: E0127 02:45:47.402896    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 02:45:47 test-preload-445920 kubelet[1122]: E0127 02:45:47.402991    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume podName:f3148a51-8c4d-4e24-9300-0fd5af64287e nodeName:}" failed. No retries permitted until 2025-01-27 02:45:49.402973343 +0000 UTC m=+29.974497232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume") pod "coredns-6d4b75cb6d-7whl5" (UID: "f3148a51-8c4d-4e24-9300-0fd5af64287e") : object "kube-system"/"coredns" not registered
	Jan 27 02:45:48 test-preload-445920 kubelet[1122]: E0127 02:45:48.678321    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7whl5" podUID=f3148a51-8c4d-4e24-9300-0fd5af64287e
	Jan 27 02:45:49 test-preload-445920 kubelet[1122]: E0127 02:45:49.424497    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 02:45:49 test-preload-445920 kubelet[1122]: E0127 02:45:49.424733    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume podName:f3148a51-8c4d-4e24-9300-0fd5af64287e nodeName:}" failed. No retries permitted until 2025-01-27 02:45:53.424713631 +0000 UTC m=+33.996237509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f3148a51-8c4d-4e24-9300-0fd5af64287e-config-volume") pod "coredns-6d4b75cb6d-7whl5" (UID: "f3148a51-8c4d-4e24-9300-0fd5af64287e") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e0edabb7aff5ef891d199d91f4180f641da14a9ed5778ffc7c9b0f3c0619da0d] <==
	I0127 02:45:47.071705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-445920 -n test-preload-445920
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-445920 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-445920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-445920
--- FAIL: TestPreload (227.28s)
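Note on the kubelet errors in the TestPreload logs above: the MountVolume.SetUp failures for the coredns ConfigMap are retried with a doubling delay (500ms, 1s, 2s, 4s) while no CNI config exists yet in /etc/cni/net.d/. The sketch below only illustrates that capped-doubling retry pattern visible in the timestamps; it is not kubelet source code, and the maxDelay value and variable names are assumptions made for the example.

// Hedged illustration of the capped-doubling retry seen in the kubelet log
// above. NOT kubelet code; maxDelay is an assumed cap for the sketch.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute // assumption for the example

	for attempt := 1; attempt <= 5; attempt++ {
		// Mirrors "No retries permitted until ... (durationBeforeRetry <delay>)"
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}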

                                                
                                    
x
+
TestKubernetesUpgrade (722.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m48.214634746s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-080871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-080871" primary control-plane node in "kubernetes-upgrade-080871" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:50:47.285070  940744 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:50:47.285199  940744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:47.285210  940744 out.go:358] Setting ErrFile to fd 2...
	I0127 02:50:47.285217  940744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:47.285444  940744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:50:47.286127  940744 out.go:352] Setting JSON to false
	I0127 02:50:47.287286  940744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12790,"bootTime":1737933457,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:50:47.287404  940744 start.go:139] virtualization: kvm guest
	I0127 02:50:47.289473  940744 out.go:177] * [kubernetes-upgrade-080871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:50:47.291246  940744 notify.go:220] Checking for updates...
	I0127 02:50:47.291287  940744 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:50:47.292551  940744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:50:47.293991  940744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:47.295295  940744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:50:47.296633  940744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:50:47.297822  940744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:50:47.299731  940744 config.go:182] Loaded profile config "NoKubernetes-954952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 02:50:47.299896  940744 config.go:182] Loaded profile config "running-upgrade-078958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 02:50:47.300035  940744 config.go:182] Loaded profile config "stopped-upgrade-883403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 02:50:47.300171  940744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:50:47.340580  940744 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 02:50:47.341712  940744 start.go:297] selected driver: kvm2
	I0127 02:50:47.341733  940744 start.go:901] validating driver "kvm2" against <nil>
	I0127 02:50:47.341749  940744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:50:47.342804  940744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:47.342927  940744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:50:47.360570  940744 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:50:47.360635  940744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:50:47.360961  940744 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 02:50:47.360992  940744 cni.go:84] Creating CNI manager for ""
	I0127 02:50:47.361040  940744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:47.361050  940744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 02:50:47.361115  940744 start.go:340] cluster config:
	{Name:kubernetes-upgrade-080871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-080871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:50:47.361218  940744 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:47.362876  940744 out.go:177] * Starting "kubernetes-upgrade-080871" primary control-plane node in "kubernetes-upgrade-080871" cluster
	I0127 02:50:47.364117  940744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 02:50:47.364178  940744 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 02:50:47.364201  940744 cache.go:56] Caching tarball of preloaded images
	I0127 02:50:47.364315  940744 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:50:47.364329  940744 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 02:50:47.364441  940744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/config.json ...
	I0127 02:50:47.364463  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/config.json: {Name:mk28c61a81a6e49b82dc859d028a54ec5bd8b630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:47.364597  940744 start.go:360] acquireMachinesLock for kubernetes-upgrade-080871: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:51:03.013609  940744 start.go:364] duration metric: took 15.648972099s to acquireMachinesLock for "kubernetes-upgrade-080871"
	I0127 02:51:03.013656  940744 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-080871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-080871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 02:51:03.013773  940744 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 02:51:03.016649  940744 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 02:51:03.016940  940744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:51:03.016999  940744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:51:03.034088  940744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0127 02:51:03.034507  940744 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:51:03.035064  940744 main.go:141] libmachine: Using API Version  1
	I0127 02:51:03.035085  940744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:51:03.035417  940744 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:51:03.035651  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetMachineName
	I0127 02:51:03.035793  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:03.035947  940744 start.go:159] libmachine.API.Create for "kubernetes-upgrade-080871" (driver="kvm2")
	I0127 02:51:03.035982  940744 client.go:168] LocalClient.Create starting
	I0127 02:51:03.036017  940744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 02:51:03.036062  940744 main.go:141] libmachine: Decoding PEM data...
	I0127 02:51:03.036082  940744 main.go:141] libmachine: Parsing certificate...
	I0127 02:51:03.036163  940744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 02:51:03.036195  940744 main.go:141] libmachine: Decoding PEM data...
	I0127 02:51:03.036219  940744 main.go:141] libmachine: Parsing certificate...
	I0127 02:51:03.036248  940744 main.go:141] libmachine: Running pre-create checks...
	I0127 02:51:03.036260  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .PreCreateCheck
	I0127 02:51:03.036698  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetConfigRaw
	I0127 02:51:03.037208  940744 main.go:141] libmachine: Creating machine...
	I0127 02:51:03.037229  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Create
	I0127 02:51:03.037361  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) creating KVM machine...
	I0127 02:51:03.037385  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) creating network...
	I0127 02:51:03.038644  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found existing default KVM network
	I0127 02:51:03.039884  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:03.039704  940934 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:29:a3} reservation:<nil>}
	I0127 02:51:03.041227  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:03.041144  940934 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a07b0}
	I0127 02:51:03.041253  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | created network xml: 
	I0127 02:51:03.041265  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | <network>
	I0127 02:51:03.041280  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   <name>mk-kubernetes-upgrade-080871</name>
	I0127 02:51:03.041307  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   <dns enable='no'/>
	I0127 02:51:03.041325  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   
	I0127 02:51:03.041336  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 02:51:03.041358  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |     <dhcp>
	I0127 02:51:03.041372  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 02:51:03.041380  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |     </dhcp>
	I0127 02:51:03.041392  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   </ip>
	I0127 02:51:03.041411  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG |   
	I0127 02:51:03.041423  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | </network>
	I0127 02:51:03.041438  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | 
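Note on the debug output above: minikube generated an isolated libvirt network for the cluster, a private /24 with DNS disabled and a DHCP range for guest leases. Below is a minimal Go sketch of rendering an equivalent <network> document with text/template; the template text and field names are illustrative assumptions, not minikube's actual source.

// Hedged sketch: render a libvirt <network> definition equivalent to the one
// logged above. Not minikube code; field names are invented for the example.
package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Name":      "mk-kubernetes-upgrade-080871",
		"Gateway":   "192.168.50.1",
		"Netmask":   "255.255.255.0",
		"ClientMin": "192.168.50.2",
		"ClientMax": "192.168.50.253",
	})
}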
	I0127 02:51:03.046714  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | trying to create private KVM network mk-kubernetes-upgrade-080871 192.168.50.0/24...
	I0127 02:51:03.125477  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | private KVM network mk-kubernetes-upgrade-080871 192.168.50.0/24 created
	I0127 02:51:03.125515  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:03.125474  940934 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:51:03.125528  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871 ...
	I0127 02:51:03.125565  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 02:51:03.125725  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 02:51:03.429550  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:03.429407  940934 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa...
	I0127 02:51:04.215550  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:04.215445  940934 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/kubernetes-upgrade-080871.rawdisk...
	I0127 02:51:04.215577  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Writing magic tar header
	I0127 02:51:04.215590  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Writing SSH key tar header
	I0127 02:51:04.215598  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:04.215556  940934 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871 ...
	I0127 02:51:04.215694  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871
	I0127 02:51:04.215729  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 02:51:04.215762  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871 (perms=drwx------)
	I0127 02:51:04.215783  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 02:51:04.215795  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 02:51:04.215838  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 02:51:04.215861  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 02:51:04.215872  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:51:04.215886  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 02:51:04.215895  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 02:51:04.215901  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home/jenkins
	I0127 02:51:04.215912  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | checking permissions on dir: /home
	I0127 02:51:04.215920  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | skipping /home - not owner
	I0127 02:51:04.215928  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 02:51:04.215937  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) creating domain...
	I0127 02:51:04.216771  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) define libvirt domain using xml: 
	I0127 02:51:04.216798  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) <domain type='kvm'>
	I0127 02:51:04.216811  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <name>kubernetes-upgrade-080871</name>
	I0127 02:51:04.216819  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <memory unit='MiB'>2200</memory>
	I0127 02:51:04.216851  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <vcpu>2</vcpu>
	I0127 02:51:04.216874  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <features>
	I0127 02:51:04.216885  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <acpi/>
	I0127 02:51:04.216912  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <apic/>
	I0127 02:51:04.216934  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <pae/>
	I0127 02:51:04.216942  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     
	I0127 02:51:04.216966  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   </features>
	I0127 02:51:04.216983  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <cpu mode='host-passthrough'>
	I0127 02:51:04.216992  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   
	I0127 02:51:04.216997  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   </cpu>
	I0127 02:51:04.217004  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <os>
	I0127 02:51:04.217009  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <type>hvm</type>
	I0127 02:51:04.217016  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <boot dev='cdrom'/>
	I0127 02:51:04.217023  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <boot dev='hd'/>
	I0127 02:51:04.217029  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <bootmenu enable='no'/>
	I0127 02:51:04.217038  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   </os>
	I0127 02:51:04.217043  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   <devices>
	I0127 02:51:04.217049  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <disk type='file' device='cdrom'>
	I0127 02:51:04.217058  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/boot2docker.iso'/>
	I0127 02:51:04.217065  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <target dev='hdc' bus='scsi'/>
	I0127 02:51:04.217070  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <readonly/>
	I0127 02:51:04.217080  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </disk>
	I0127 02:51:04.217088  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <disk type='file' device='disk'>
	I0127 02:51:04.217093  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 02:51:04.217103  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/kubernetes-upgrade-080871.rawdisk'/>
	I0127 02:51:04.217114  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <target dev='hda' bus='virtio'/>
	I0127 02:51:04.217119  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </disk>
	I0127 02:51:04.217127  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <interface type='network'>
	I0127 02:51:04.217132  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <source network='mk-kubernetes-upgrade-080871'/>
	I0127 02:51:04.217137  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <model type='virtio'/>
	I0127 02:51:04.217142  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </interface>
	I0127 02:51:04.217149  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <interface type='network'>
	I0127 02:51:04.217154  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <source network='default'/>
	I0127 02:51:04.217159  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <model type='virtio'/>
	I0127 02:51:04.217164  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </interface>
	I0127 02:51:04.217169  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <serial type='pty'>
	I0127 02:51:04.217181  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <target port='0'/>
	I0127 02:51:04.217190  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </serial>
	I0127 02:51:04.217196  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <console type='pty'>
	I0127 02:51:04.217206  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <target type='serial' port='0'/>
	I0127 02:51:04.217215  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </console>
	I0127 02:51:04.217230  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     <rng model='virtio'>
	I0127 02:51:04.217243  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)       <backend model='random'>/dev/random</backend>
	I0127 02:51:04.217253  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     </rng>
	I0127 02:51:04.217258  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     
	I0127 02:51:04.217267  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)     
	I0127 02:51:04.217272  940744 main.go:141] libmachine: (kubernetes-upgrade-080871)   </devices>
	I0127 02:51:04.217281  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) </domain>
	I0127 02:51:04.217289  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) 
	I0127 02:51:04.222158  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:eb:01:81 in network default
	I0127 02:51:04.222639  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) starting domain...
	I0127 02:51:04.222676  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:04.222687  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) ensuring networks are active...
	I0127 02:51:04.223382  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Ensuring network default is active
	I0127 02:51:04.223657  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Ensuring network mk-kubernetes-upgrade-080871 is active
	I0127 02:51:04.224184  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) getting domain XML...
	I0127 02:51:04.224861  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) creating domain...
	I0127 02:51:05.852331  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) waiting for IP...
	I0127 02:51:05.853206  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:05.853607  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:05.853661  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:05.853602  940934 retry.go:31] will retry after 246.677252ms: waiting for domain to come up
	I0127 02:51:06.102250  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.102824  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.102855  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:06.102759  940934 retry.go:31] will retry after 327.635039ms: waiting for domain to come up
	I0127 02:51:06.432455  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.433013  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.433046  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:06.432972  940934 retry.go:31] will retry after 400.834993ms: waiting for domain to come up
	I0127 02:51:06.835497  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.836160  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:06.836194  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:06.836071  940934 retry.go:31] will retry after 390.297416ms: waiting for domain to come up
	I0127 02:51:07.228003  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:07.228555  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:07.228581  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:07.228515  940934 retry.go:31] will retry after 578.296622ms: waiting for domain to come up
	I0127 02:51:07.808358  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:07.808769  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:07.808843  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:07.808757  940934 retry.go:31] will retry after 598.708899ms: waiting for domain to come up
	I0127 02:51:08.409333  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:08.410100  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:08.410128  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:08.409975  940934 retry.go:31] will retry after 1.021186576s: waiting for domain to come up
	I0127 02:51:09.432691  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:09.433255  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:09.433294  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:09.433219  940934 retry.go:31] will retry after 1.282861669s: waiting for domain to come up
	I0127 02:51:10.717644  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:10.718190  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:10.718218  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:10.718151  940934 retry.go:31] will retry after 1.593967771s: waiting for domain to come up
	I0127 02:51:12.313711  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:12.314158  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:12.314204  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:12.314139  940934 retry.go:31] will retry after 2.169176344s: waiting for domain to come up
	I0127 02:51:14.485790  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:14.486358  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:14.486405  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:14.486368  940934 retry.go:31] will retry after 1.813023384s: waiting for domain to come up
	I0127 02:51:16.301296  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:16.301752  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:16.301783  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:16.301710  940934 retry.go:31] will retry after 2.522634542s: waiting for domain to come up
	I0127 02:51:18.826331  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:18.826919  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:18.826955  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:18.826864  940934 retry.go:31] will retry after 4.504370621s: waiting for domain to come up
	I0127 02:51:23.332978  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:23.333716  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find current IP address of domain kubernetes-upgrade-080871 in network mk-kubernetes-upgrade-080871
	I0127 02:51:23.333751  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | I0127 02:51:23.333667  940934 retry.go:31] will retry after 5.322701323s: waiting for domain to come up
	I0127 02:51:28.658343  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.658842  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) found domain IP: 192.168.50.96
	I0127 02:51:28.658876  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has current primary IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.658892  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) reserving static IP address...
	I0127 02:51:28.659263  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-080871", mac: "52:54:00:ea:19:7f", ip: "192.168.50.96"} in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.741750  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) reserved static IP address 192.168.50.96 for domain kubernetes-upgrade-080871
	I0127 02:51:28.741786  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) waiting for SSH...
	I0127 02:51:28.741803  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Getting to WaitForSSH function...
	I0127 02:51:28.744616  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.745022  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:28.745060  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.745180  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Using SSH client type: external
	I0127 02:51:28.745208  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa (-rw-------)
	I0127 02:51:28.745262  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:51:28.745279  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | About to run SSH command:
	I0127 02:51:28.745299  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | exit 0
	I0127 02:51:28.873092  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | SSH cmd err, output: <nil>: 
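Note on the WaitForSSH step above: the new VM is probed by running `exit 0` over an external ssh client with host-key checking disabled, and a zero exit status is treated as "SSH is up". The Go sketch below reproduces the same probe via os/exec, reusing the address, key path, and options from the log; it is an illustration, not minikube's implementation.

// Hedged sketch of the external-SSH readiness probe logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Key path and address copied from the log above.
	key := "/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa"
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.50.96",
		"exit 0",
	)
	if err := cmd.Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}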
	I0127 02:51:28.873309  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) KVM machine creation complete
	I0127 02:51:28.873708  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetConfigRaw
	I0127 02:51:28.874466  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:28.874701  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:28.874889  940744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 02:51:28.874907  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetState
	I0127 02:51:28.876432  940744 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 02:51:28.876449  940744 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 02:51:28.876456  940744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 02:51:28.876465  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:28.878879  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.879293  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:28.879316  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.879469  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:28.879666  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:28.879833  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:28.880000  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:28.880225  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:28.880432  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:28.880443  940744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 02:51:28.980211  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:51:28.980246  940744 main.go:141] libmachine: Detecting the provisioner...
	I0127 02:51:28.980259  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:28.982891  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.983203  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:28.983236  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:28.983362  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:28.983652  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:28.983875  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:28.984075  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:28.984266  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:28.984495  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:28.984510  940744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 02:51:29.085812  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 02:51:29.085892  940744 main.go:141] libmachine: found compatible host: buildroot
	I0127 02:51:29.085903  940744 main.go:141] libmachine: Provisioning with buildroot...
	I0127 02:51:29.085915  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetMachineName
	I0127 02:51:29.086183  940744 buildroot.go:166] provisioning hostname "kubernetes-upgrade-080871"
	I0127 02:51:29.086215  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetMachineName
	I0127 02:51:29.086422  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:29.089538  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.089939  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.089956  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.090129  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:29.090319  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.090504  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.090649  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:29.090809  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:29.091051  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:29.091070  940744 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-080871 && echo "kubernetes-upgrade-080871" | sudo tee /etc/hostname
	I0127 02:51:29.208116  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-080871
	
	I0127 02:51:29.208148  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:29.211353  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.211688  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.211735  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.211920  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:29.212143  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.212324  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.212486  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:29.212659  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:29.212846  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:29.212866  940744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-080871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-080871/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-080871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:51:29.325883  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:51:29.325918  940744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 02:51:29.325964  940744 buildroot.go:174] setting up certificates
	I0127 02:51:29.325982  940744 provision.go:84] configureAuth start
	I0127 02:51:29.326000  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetMachineName
	I0127 02:51:29.326359  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetIP
	I0127 02:51:29.329275  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.329647  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.329678  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.329888  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:29.332247  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.332646  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.332677  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.332826  940744 provision.go:143] copyHostCerts
	I0127 02:51:29.332932  940744 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 02:51:29.332954  940744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 02:51:29.333028  940744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 02:51:29.333170  940744 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 02:51:29.333183  940744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 02:51:29.333214  940744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 02:51:29.333318  940744 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 02:51:29.333330  940744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 02:51:29.333359  940744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 02:51:29.333423  940744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-080871 san=[127.0.0.1 192.168.50.96 kubernetes-upgrade-080871 localhost minikube]
	I0127 02:51:29.706162  940744 provision.go:177] copyRemoteCerts
	I0127 02:51:29.706242  940744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:51:29.706275  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:29.709156  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.709536  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.709571  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.709721  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:29.709969  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.710110  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:29.710264  940744 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 02:51:29.799373  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:51:29.827893  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 02:51:29.856501  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:51:29.881326  940744 provision.go:87] duration metric: took 555.326578ms to configureAuth
	I0127 02:51:29.881358  940744 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:51:29.881515  940744 config.go:182] Loaded profile config "kubernetes-upgrade-080871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 02:51:29.881583  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:29.884439  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.884731  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:29.884766  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:29.884998  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:29.885202  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.885360  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:29.885482  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:29.885602  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:29.885797  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:29.885819  940744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 02:51:30.111374  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 02:51:30.111403  940744 main.go:141] libmachine: Checking connection to Docker...
	I0127 02:51:30.111416  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetURL
	I0127 02:51:30.112796  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | using libvirt version 6000000
	I0127 02:51:30.115345  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.115710  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.115735  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.115951  940744 main.go:141] libmachine: Docker is up and running!
	I0127 02:51:30.115971  940744 main.go:141] libmachine: Reticulating splines...
	I0127 02:51:30.115978  940744 client.go:171] duration metric: took 27.079984908s to LocalClient.Create
	I0127 02:51:30.116000  940744 start.go:167] duration metric: took 27.080055948s to libmachine.API.Create "kubernetes-upgrade-080871"
	I0127 02:51:30.116012  940744 start.go:293] postStartSetup for "kubernetes-upgrade-080871" (driver="kvm2")
	I0127 02:51:30.116021  940744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:51:30.116048  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:30.116312  940744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:51:30.116344  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:30.118914  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.119213  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.119246  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.119419  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:30.119600  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:30.119761  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:30.119902  940744 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 02:51:30.202645  940744 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:51:30.207143  940744 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:51:30.207170  940744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 02:51:30.207229  940744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 02:51:30.207313  940744 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 02:51:30.207429  940744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:51:30.218209  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:51:30.245344  940744 start.go:296] duration metric: took 129.321079ms for postStartSetup
	I0127 02:51:30.245396  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetConfigRaw
	I0127 02:51:30.246039  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetIP
	I0127 02:51:30.249044  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.249465  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.249502  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.249739  940744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/config.json ...
	I0127 02:51:30.249944  940744 start.go:128] duration metric: took 27.236156763s to createHost
	I0127 02:51:30.249971  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:30.252564  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.252953  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.252988  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.253173  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:30.253391  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:30.253561  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:30.253688  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:30.253873  940744 main.go:141] libmachine: Using SSH client type: native
	I0127 02:51:30.254087  940744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0127 02:51:30.254104  940744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:51:30.357571  940744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946290.341173167
	
	I0127 02:51:30.357597  940744 fix.go:216] guest clock: 1737946290.341173167
	I0127 02:51:30.357608  940744 fix.go:229] Guest: 2025-01-27 02:51:30.341173167 +0000 UTC Remote: 2025-01-27 02:51:30.249958453 +0000 UTC m=+43.013269083 (delta=91.214714ms)
	I0127 02:51:30.357662  940744 fix.go:200] guest clock delta is within tolerance: 91.214714ms
	I0127 02:51:30.357669  940744 start.go:83] releasing machines lock for "kubernetes-upgrade-080871", held for 27.344029934s
	I0127 02:51:30.357711  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:30.357993  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetIP
	I0127 02:51:30.360817  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.361420  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.361453  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.361834  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:30.362439  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:30.362623  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 02:51:30.362716  940744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:51:30.362793  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:30.362858  940744 ssh_runner.go:195] Run: cat /version.json
	I0127 02:51:30.362881  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 02:51:30.365612  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.365930  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.366018  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.366048  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.366209  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:30.366343  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:30.366366  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:30.366404  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:30.366489  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 02:51:30.366573  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:30.366641  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 02:51:30.366706  940744 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 02:51:30.366745  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 02:51:30.366848  940744 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 02:51:30.473678  940744 ssh_runner.go:195] Run: systemctl --version
	I0127 02:51:30.481463  940744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 02:51:30.651216  940744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:51:30.657234  940744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:51:30.657327  940744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:51:30.675262  940744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:51:30.675289  940744 start.go:495] detecting cgroup driver to use...
	I0127 02:51:30.675397  940744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 02:51:30.696666  940744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 02:51:30.712634  940744 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:51:30.712700  940744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:51:30.731481  940744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:51:30.750087  940744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:51:30.871220  940744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:51:31.032071  940744 docker.go:233] disabling docker service ...
	I0127 02:51:31.032149  940744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:51:31.049759  940744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:51:31.066751  940744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:51:31.260151  940744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:51:31.410040  940744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:51:31.425393  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:51:31.445513  940744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 02:51:31.445587  940744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:51:31.456341  940744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 02:51:31.456413  940744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:51:31.467277  940744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:51:31.481278  940744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:51:31.494962  940744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:51:31.508489  940744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:51:31.518747  940744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:51:31.518818  940744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:51:31.532457  940744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:51:31.542602  940744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:51:31.706261  940744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 02:51:31.827653  940744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 02:51:31.827734  940744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 02:51:31.832944  940744 start.go:563] Will wait 60s for crictl version
	I0127 02:51:31.833033  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:31.837172  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:51:31.890370  940744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 02:51:31.890462  940744 ssh_runner.go:195] Run: crio --version
	I0127 02:51:31.922519  940744 ssh_runner.go:195] Run: crio --version
	I0127 02:51:31.952735  940744 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 02:51:31.954001  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetIP
	I0127 02:51:31.957757  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:31.958164  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:51:19 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 02:51:31.958198  940744 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 02:51:31.958435  940744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 02:51:31.962645  940744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:51:31.978568  940744 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-080871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-080871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:51:31.978689  940744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 02:51:31.978736  940744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:51:32.013153  940744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 02:51:32.013235  940744 ssh_runner.go:195] Run: which lz4
	I0127 02:51:32.017484  940744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 02:51:32.021423  940744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 02:51:32.021452  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 02:51:33.570796  940744 crio.go:462] duration metric: took 1.55334025s to copy over tarball
	I0127 02:51:33.570884  940744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 02:51:36.480192  940744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.909240305s)
	I0127 02:51:36.480241  940744 crio.go:469] duration metric: took 2.909412561s to extract the tarball
	I0127 02:51:36.480253  940744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 02:51:36.524715  940744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:51:36.593030  940744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 02:51:36.593060  940744 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:51:36.593163  940744 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:36.593226  940744 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 02:51:36.593244  940744 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 02:51:36.593269  940744 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:36.593287  940744 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:36.593226  940744 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:36.593202  940744 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:36.593194  940744 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:51:36.594734  940744 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:36.594751  940744 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:51:36.594761  940744 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 02:51:36.594807  940744 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:36.594872  940744 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:36.594906  940744 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:36.594916  940744 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 02:51:36.594986  940744 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:36.792606  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 02:51:36.815603  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:36.816799  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:36.825396  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:36.830711  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:36.844526  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:36.844600  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 02:51:36.857292  940744 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 02:51:36.857350  940744 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 02:51:36.857400  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:36.950350  940744 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 02:51:36.950401  940744 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:36.950449  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:36.970679  940744 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 02:51:36.970729  940744 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:36.970781  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:36.986025  940744 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 02:51:36.986082  940744 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:36.986138  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:37.003669  940744 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 02:51:37.003718  940744 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:37.003764  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:37.017487  940744 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 02:51:37.017542  940744 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:37.017505  940744 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 02:51:37.017578  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:51:37.017582  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:37.017597  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:37.017607  940744 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 02:51:37.017642  940744 ssh_runner.go:195] Run: which crictl
	I0127 02:51:37.017658  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:37.017675  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:37.017677  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:37.104947  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:51:37.105055  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:51:37.146712  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:37.146770  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:37.146728  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:37.146831  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:37.146837  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:37.208765  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:51:37.214108  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:51:37.325617  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:51:37.325679  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:51:37.325739  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:37.325755  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:51:37.325830  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:51:37.344753  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:51:37.344781  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 02:51:37.451803  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 02:51:37.451846  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 02:51:37.466264  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 02:51:37.466295  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 02:51:37.466332  940744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:51:37.466354  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 02:51:37.498584  940744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 02:51:37.804682  940744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:51:37.945186  940744 cache_images.go:92] duration metric: took 1.352103653s to LoadCachedImages
	W0127 02:51:37.945286  940744 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0127 02:51:37.945304  940744 kubeadm.go:934] updating node { 192.168.50.96 8443 v1.20.0 crio true true} ...
	I0127 02:51:37.945429  940744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-080871 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-080871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:51:37.945521  940744 ssh_runner.go:195] Run: crio config
	I0127 02:51:38.013475  940744 cni.go:84] Creating CNI manager for ""
	I0127 02:51:38.013502  940744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:51:38.013515  940744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:51:38.013539  940744 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-080871 NodeName:kubernetes-upgrade-080871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 02:51:38.013695  940744 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-080871"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:51:38.013754  940744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 02:51:38.026844  940744 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:51:38.026922  940744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:51:38.039584  940744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 02:51:38.059864  940744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:51:38.080565  940744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 02:51:38.098718  940744 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0127 02:51:38.102699  940744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:51:38.114810  940744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:51:38.229606  940744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:51:38.247164  940744 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871 for IP: 192.168.50.96
	I0127 02:51:38.247201  940744 certs.go:194] generating shared ca certs ...
	I0127 02:51:38.247224  940744 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.247402  940744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:51:38.247486  940744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:51:38.247503  940744 certs.go:256] generating profile certs ...
	I0127 02:51:38.247581  940744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.key
	I0127 02:51:38.247601  940744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.crt with IP's: []
	I0127 02:51:38.382710  940744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.crt ...
	I0127 02:51:38.382745  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.crt: {Name:mka3314034312a1518a605e96c9d92c372f14eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.382940  940744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.key ...
	I0127 02:51:38.382962  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.key: {Name:mk701eb5e8dd401e60943878dba935974b8066e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.383078  940744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key.b77b639c
	I0127 02:51:38.383104  940744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt.b77b639c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.96]
	I0127 02:51:38.450836  940744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt.b77b639c ...
	I0127 02:51:38.450874  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt.b77b639c: {Name:mk9d0b30bd19e60b5a2ba722e3f0a9528b5eb56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.461591  940744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key.b77b639c ...
	I0127 02:51:38.461638  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key.b77b639c: {Name:mk60861d0f691da2705c392cad8c6daf07d758f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.461792  940744 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt.b77b639c -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt
	I0127 02:51:38.461965  940744 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key.b77b639c -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key
	I0127 02:51:38.462054  940744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.key
	I0127 02:51:38.462083  940744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.crt with IP's: []
	I0127 02:51:38.564509  940744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.crt ...
	I0127 02:51:38.564543  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.crt: {Name:mk9da258d95a881b5517b2efebc146929c63ecf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.580192  940744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.key ...
	I0127 02:51:38.580254  940744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.key: {Name:mkb77d6faec3836abdb895cb1a81ca69a08b7705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:51:38.580555  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:51:38.580606  940744 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:51:38.580620  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:51:38.580703  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:51:38.580788  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:51:38.580832  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:51:38.580899  940744 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:51:38.582416  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:51:38.616555  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:51:38.643389  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:51:38.666534  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:51:38.689070  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 02:51:38.713189  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:51:38.751120  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:51:38.777521  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:51:38.803922  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:51:38.826526  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:51:38.852194  940744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:51:38.875793  940744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:51:38.892610  940744 ssh_runner.go:195] Run: openssl version
	I0127 02:51:38.898656  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:51:38.910163  940744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:51:38.914678  940744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:51:38.914742  940744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:51:38.921304  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 02:51:38.936116  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:51:38.951632  940744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:51:38.957943  940744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:51:38.958004  940744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:51:38.963523  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:51:38.973832  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:51:38.984296  940744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:51:38.988879  940744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:51:38.988957  940744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:51:38.994388  940744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:51:39.004879  940744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:51:39.008740  940744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 02:51:39.008797  940744 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-080871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-080871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:51:39.008873  940744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:51:39.008961  940744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:51:39.065294  940744 cri.go:89] found id: ""
	I0127 02:51:39.065368  940744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:51:39.075594  940744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:51:39.085175  940744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:51:39.094958  940744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:51:39.094977  940744 kubeadm.go:157] found existing configuration files:
	
	I0127 02:51:39.095018  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:51:39.103843  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:51:39.103911  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:51:39.115978  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:51:39.136654  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:51:39.136732  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:51:39.148870  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:51:39.161962  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:51:39.162031  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:51:39.174681  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:51:39.191924  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:51:39.191997  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:51:39.202220  940744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 02:51:39.353827  940744 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 02:51:39.353921  940744 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:51:39.500677  940744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:51:39.500839  940744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:51:39.500990  940744 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 02:51:39.705795  940744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:51:39.707567  940744 out.go:235]   - Generating certificates and keys ...
	I0127 02:51:39.707690  940744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:51:39.707781  940744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:51:39.974984  940744 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 02:51:40.202352  940744 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 02:51:40.335635  940744 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 02:51:40.401821  940744 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 02:51:40.629024  940744 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 02:51:40.629259  940744 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-080871 localhost] and IPs [192.168.50.96 127.0.0.1 ::1]
	I0127 02:51:40.743935  940744 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 02:51:40.744323  940744 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-080871 localhost] and IPs [192.168.50.96 127.0.0.1 ::1]
	I0127 02:51:41.052538  940744 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 02:51:41.174147  940744 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 02:51:41.228012  940744 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 02:51:41.228120  940744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:51:41.544381  940744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:51:41.682407  940744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:51:41.828868  940744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:51:42.086698  940744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:51:42.104031  940744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:51:42.105447  940744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:51:42.105546  940744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:51:42.253179  940744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:51:42.256030  940744 out.go:235]   - Booting up control plane ...
	I0127 02:51:42.256196  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:51:42.263959  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:51:42.264570  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:51:42.274106  940744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:51:42.281170  940744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 02:52:22.278286  940744 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 02:52:22.278429  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:52:22.278730  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:52:27.279295  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:52:27.279471  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:52:37.280162  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:52:37.280439  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:52:57.281453  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:52:57.281730  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:53:37.281847  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:53:37.282192  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:53:37.282221  940744 kubeadm.go:310] 
	I0127 02:53:37.282306  940744 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 02:53:37.282388  940744 kubeadm.go:310] 		timed out waiting for the condition
	I0127 02:53:37.282403  940744 kubeadm.go:310] 
	I0127 02:53:37.282444  940744 kubeadm.go:310] 	This error is likely caused by:
	I0127 02:53:37.282516  940744 kubeadm.go:310] 		- The kubelet is not running
	I0127 02:53:37.282649  940744 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 02:53:37.282657  940744 kubeadm.go:310] 
	I0127 02:53:37.282812  940744 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 02:53:37.282866  940744 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 02:53:37.282916  940744 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 02:53:37.282925  940744 kubeadm.go:310] 
	I0127 02:53:37.283107  940744 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 02:53:37.283242  940744 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 02:53:37.283267  940744 kubeadm.go:310] 
	I0127 02:53:37.283416  940744 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 02:53:37.283553  940744 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 02:53:37.283672  940744 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 02:53:37.283778  940744 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 02:53:37.283799  940744 kubeadm.go:310] 
	I0127 02:53:37.284301  940744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:53:37.284431  940744 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 02:53:37.284518  940744 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 02:53:37.284738  940744 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-080871 localhost] and IPs [192.168.50.96 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-080871 localhost] and IPs [192.168.50.96 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 02:53:37.284793  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 02:53:38.168681  940744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:53:38.186564  940744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:53:38.200033  940744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:53:38.200056  940744 kubeadm.go:157] found existing configuration files:
	
	I0127 02:53:38.200106  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:53:38.213096  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:53:38.213180  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:53:38.227632  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:53:38.239732  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:53:38.239826  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:53:38.252387  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:53:38.264148  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:53:38.264225  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:53:38.276587  940744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:53:38.286656  940744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:53:38.286736  940744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:53:38.299739  940744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 02:53:38.519030  940744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:55:34.828275  940744 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 02:55:34.828398  940744 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 02:55:34.830002  940744 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 02:55:34.830058  940744 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:55:34.830145  940744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:55:34.830274  940744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:55:34.830405  940744 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 02:55:34.830497  940744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:55:34.833321  940744 out.go:235]   - Generating certificates and keys ...
	I0127 02:55:34.833410  940744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:55:34.833484  940744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:55:34.833551  940744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 02:55:34.833605  940744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 02:55:34.833675  940744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 02:55:34.833742  940744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 02:55:34.833797  940744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 02:55:34.833859  940744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 02:55:34.833938  940744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 02:55:34.834021  940744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 02:55:34.834076  940744 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 02:55:34.834144  940744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:55:34.834228  940744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:55:34.834279  940744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:55:34.834367  940744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:55:34.834450  940744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:55:34.834548  940744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:55:34.834666  940744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:55:34.834724  940744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:55:34.834812  940744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:55:34.836183  940744 out.go:235]   - Booting up control plane ...
	I0127 02:55:34.836283  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:55:34.836371  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:55:34.836458  940744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:55:34.836527  940744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:55:34.836675  940744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 02:55:34.836743  940744 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 02:55:34.836839  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:34.837084  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:34.837144  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:34.837315  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:34.837380  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:34.837602  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:34.837691  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:34.837862  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:34.837927  940744 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:34.838086  940744 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:34.838094  940744 kubeadm.go:310] 
	I0127 02:55:34.838128  940744 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 02:55:34.838190  940744 kubeadm.go:310] 		timed out waiting for the condition
	I0127 02:55:34.838212  940744 kubeadm.go:310] 
	I0127 02:55:34.838255  940744 kubeadm.go:310] 	This error is likely caused by:
	I0127 02:55:34.838285  940744 kubeadm.go:310] 		- The kubelet is not running
	I0127 02:55:34.838379  940744 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 02:55:34.838387  940744 kubeadm.go:310] 
	I0127 02:55:34.838476  940744 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 02:55:34.838508  940744 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 02:55:34.838537  940744 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 02:55:34.838544  940744 kubeadm.go:310] 
	I0127 02:55:34.838637  940744 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 02:55:34.838712  940744 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 02:55:34.838718  940744 kubeadm.go:310] 
	I0127 02:55:34.838898  940744 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 02:55:34.839021  940744 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 02:55:34.839125  940744 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 02:55:34.839196  940744 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 02:55:34.839244  940744 kubeadm.go:310] 
	I0127 02:55:34.839279  940744 kubeadm.go:394] duration metric: took 3m55.830485884s to StartCluster
	I0127 02:55:34.839328  940744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:55:34.839386  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:55:34.881643  940744 cri.go:89] found id: ""
	I0127 02:55:34.881676  940744 logs.go:282] 0 containers: []
	W0127 02:55:34.881688  940744 logs.go:284] No container was found matching "kube-apiserver"
	I0127 02:55:34.881697  940744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 02:55:34.881780  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:55:34.915451  940744 cri.go:89] found id: ""
	I0127 02:55:34.915486  940744 logs.go:282] 0 containers: []
	W0127 02:55:34.915494  940744 logs.go:284] No container was found matching "etcd"
	I0127 02:55:34.915501  940744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 02:55:34.915564  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:55:34.956545  940744 cri.go:89] found id: ""
	I0127 02:55:34.956579  940744 logs.go:282] 0 containers: []
	W0127 02:55:34.956589  940744 logs.go:284] No container was found matching "coredns"
	I0127 02:55:34.956596  940744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:55:34.956658  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:55:34.991717  940744 cri.go:89] found id: ""
	I0127 02:55:34.991747  940744 logs.go:282] 0 containers: []
	W0127 02:55:34.991756  940744 logs.go:284] No container was found matching "kube-scheduler"
	I0127 02:55:34.991762  940744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:55:34.991825  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:55:35.025448  940744 cri.go:89] found id: ""
	I0127 02:55:35.025489  940744 logs.go:282] 0 containers: []
	W0127 02:55:35.025502  940744 logs.go:284] No container was found matching "kube-proxy"
	I0127 02:55:35.025510  940744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:55:35.025576  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:55:35.058977  940744 cri.go:89] found id: ""
	I0127 02:55:35.059017  940744 logs.go:282] 0 containers: []
	W0127 02:55:35.059027  940744 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 02:55:35.059033  940744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 02:55:35.059112  940744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:55:35.093002  940744 cri.go:89] found id: ""
	I0127 02:55:35.093035  940744 logs.go:282] 0 containers: []
	W0127 02:55:35.093043  940744 logs.go:284] No container was found matching "kindnet"
	I0127 02:55:35.093055  940744 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:55:35.093075  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 02:55:35.219448  940744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 02:55:35.219479  940744 logs.go:123] Gathering logs for CRI-O ...
	I0127 02:55:35.219498  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 02:55:35.326143  940744 logs.go:123] Gathering logs for container status ...
	I0127 02:55:35.326188  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 02:55:35.365408  940744 logs.go:123] Gathering logs for kubelet ...
	I0127 02:55:35.365442  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 02:55:35.417762  940744 logs.go:123] Gathering logs for dmesg ...
	I0127 02:55:35.417808  940744 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0127 02:55:35.431781  940744 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 02:55:35.431884  940744 out.go:270] * 
	W0127 02:55:35.431960  940744 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 02:55:35.431978  940744 out.go:270] * 
	W0127 02:55:35.433168  940744 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 02:55:35.436274  940744 out.go:201] 
	W0127 02:55:35.437718  940744 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 02:55:35.437768  940744 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 02:55:35.437797  940744 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 02:55:35.439423  940744 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
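The kubeadm output above already spells out the checks for a kubelet that never becomes healthy. A minimal troubleshooting sketch for this profile, using only the commands named in that output (the crio socket path and the CONTAINERID placeholder are the ones kubeadm printed; reaching the VM via 'minikube ssh' is an assumption about how to run them):

	minikube -p kubernetes-upgrade-080871 ssh
	# inside the VM: the health endpoint kubeadm polls
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# control-plane containers under cri-o; then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

The log's own suggestion for this failure class is to retry with an explicit cgroup driver, for example:

	minikube start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd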
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-080871
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-080871: (6.30454534s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-080871 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-080871 status --format={{.Host}}: exit status 7 (75.240309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.910954733s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-080871 version --output=json
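After the successful upgrade start, the test verifies the reported version with kubectl. A quick manual equivalent, assuming the same context name; the grep is only an illustrative way to pull the server gitVersion out of the JSON output:

	kubectl --context kubernetes-upgrade-080871 version --output=json
	kubectl --context kubernetes-upgrade-080871 version --output=json | grep -o '"gitVersion": *"[^"]*"'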
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.391151ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-080871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-080871
	    minikube start -p kubernetes-upgrade-080871 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0808712 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-080871 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
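The downgrade attempt is expected to fail: exit status 106 with K8S_DOWNGRADE_UNSUPPORTED. Condensing the suggestions from the stderr above (profile name taken from this run), the recovery paths are either recreating the cluster at the old version or continuing at the new one:

	# recreate at v1.20.0
	minikube delete -p kubernetes-upgrade-080871
	minikube start -p kubernetes-upgrade-080871 --kubernetes-version=v1.20.0

	# or keep the existing v1.32.1 cluster (what the test does next)
	minikube start -p kubernetes-upgrade-080871 --kubernetes-version=v1.32.1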
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 02:56:33.566572  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-080871 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m26.299215468s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 03:02:46.253884975 +0000 UTC m=+4513.775641878
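When reporting this failure upstream, the boxes earlier in the log ask for a full log bundle; combining that command with the failing profile (adding -p here is an assumption about which profile to capture):

	minikube logs -p kubernetes-upgrade-080871 --file=logs.txt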
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-080871 -n kubernetes-upgrade-080871
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-080871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-080871 logs -n 25: (1.357028621s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-409157                           | force-systemd-flag-409157 | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	| start   | -p old-k8s-version-542356                              | old-k8s-version-542356    | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-919407 ssh                                | cert-options-919407       | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-919407 -- sudo                         | cert-options-919407       | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-919407                                 | cert-options-919407       | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	| start   | -p no-preload-844432                                   | no-preload-844432         | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:55 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-844432             | no-preload-844432         | jenkins | v1.35.0 | 27 Jan 25 02:55 UTC | 27 Jan 25 02:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-844432                                   | no-preload-844432         | jenkins | v1.35.0 | 27 Jan 25 02:55 UTC | 27 Jan 25 02:56 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-080871                           | kubernetes-upgrade-080871 | jenkins | v1.35.0 | 27 Jan 25 02:55 UTC | 27 Jan 25 02:55 UTC |
	| start   | -p kubernetes-upgrade-080871                           | kubernetes-upgrade-080871 | jenkins | v1.35.0 | 27 Jan 25 02:55 UTC | 27 Jan 25 02:56 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-591242                              | cert-expiration-591242    | jenkins | v1.35.0 | 27 Jan 25 02:55 UTC | 27 Jan 25 02:57 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-080871                           | kubernetes-upgrade-080871 | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-080871                           | kubernetes-upgrade-080871 | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 03:02 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-844432                  | no-preload-844432         | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-844432                                   | no-preload-844432         | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-591242                              | cert-expiration-591242    | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
	| start   | -p embed-certs-896179                                  | embed-certs-896179        | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:59 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-542356        | old-k8s-version-542356    | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-896179            | embed-certs-896179        | jenkins | v1.35.0 | 27 Jan 25 02:59 UTC | 27 Jan 25 02:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-896179                                  | embed-certs-896179        | jenkins | v1.35.0 | 27 Jan 25 02:59 UTC | 27 Jan 25 03:01 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-542356                              | old-k8s-version-542356    | jenkins | v1.35.0 | 27 Jan 25 02:59 UTC | 27 Jan 25 02:59 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-542356             | old-k8s-version-542356    | jenkins | v1.35.0 | 27 Jan 25 02:59 UTC | 27 Jan 25 02:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-542356                              | old-k8s-version-542356    | jenkins | v1.35.0 | 27 Jan 25 02:59 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-896179                 | embed-certs-896179        | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-896179                                  | embed-certs-896179        | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:01:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:01:03.373428  949037 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:01:03.373529  949037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:01:03.373534  949037 out.go:358] Setting ErrFile to fd 2...
	I0127 03:01:03.373538  949037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:01:03.373722  949037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:01:03.374276  949037 out.go:352] Setting JSON to false
	I0127 03:01:03.375334  949037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13406,"bootTime":1737933457,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:01:03.375432  949037 start.go:139] virtualization: kvm guest
	I0127 03:01:03.377645  949037 out.go:177] * [embed-certs-896179] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:01:03.378858  949037 notify.go:220] Checking for updates...
	I0127 03:01:03.378870  949037 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:01:03.380126  949037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:01:03.381590  949037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:01:03.383029  949037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:01:03.384432  949037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:01:03.385727  949037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:01:03.387477  949037 config.go:182] Loaded profile config "embed-certs-896179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:01:03.388072  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:03.388131  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:03.403853  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0127 03:01:03.404265  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:03.404813  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:03.404839  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:03.405223  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:03.405451  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:03.405713  949037 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:01:03.406043  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:03.406105  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:03.420845  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0127 03:01:03.421373  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:03.421887  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:03.421916  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:03.422272  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:03.422488  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:03.458374  949037 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 03:01:03.459620  949037 start.go:297] selected driver: kvm2
	I0127 03:01:03.459635  949037 start.go:901] validating driver "kvm2" against &{Name:embed-certs-896179 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-896179 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:01:03.459752  949037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:01:03.460713  949037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:01:03.460839  949037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:01:03.476627  949037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:01:03.477124  949037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:01:03.477163  949037 cni.go:84] Creating CNI manager for ""
	I0127 03:01:03.477244  949037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:01:03.477306  949037 start.go:340] cluster config:
	{Name:embed-certs-896179 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-896179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:01:03.477421  949037 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:01:03.479800  949037 out.go:177] * Starting "embed-certs-896179" primary control-plane node in "embed-certs-896179" cluster
	I0127 03:01:03.481016  949037 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:01:03.481058  949037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:01:03.481069  949037 cache.go:56] Caching tarball of preloaded images
	I0127 03:01:03.481157  949037 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:01:03.481167  949037 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:01:03.481264  949037 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/config.json ...
	I0127 03:01:03.481451  949037 start.go:360] acquireMachinesLock for embed-certs-896179: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:01:03.481513  949037 start.go:364] duration metric: took 45.039µs to acquireMachinesLock for "embed-certs-896179"
	I0127 03:01:03.481528  949037 start.go:96] Skipping create...Using existing machine configuration
	I0127 03:01:03.481535  949037 fix.go:54] fixHost starting: 
	I0127 03:01:03.481783  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:03.481816  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:03.496774  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41501
	I0127 03:01:03.497215  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:03.497688  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:03.497711  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:03.498034  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:03.498259  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:03.498467  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:03.500066  949037 fix.go:112] recreateIfNeeded on embed-certs-896179: state=Stopped err=<nil>
	I0127 03:01:03.500100  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	W0127 03:01:03.500262  949037 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 03:01:03.502677  949037 out.go:177] * Restarting existing kvm2 VM for "embed-certs-896179" ...
	I0127 03:01:00.741110  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:02.748416  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:01.669214  946842 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.068044408s)
	W0127 03:01:01.669260  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0127 03:01:01.669268  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:01.669282  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:01.705166  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:01.705199  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:01.753223  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:01.753261  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:01.788749  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:01.788787  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:01.827062  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:01.827103  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:01.874042  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:01.874087  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:01.923918  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:01.923964  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:01.958772  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:01.958807  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:01.992506  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:01.992541  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:02.060817  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:02.060858  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:02.407031  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:02.407076  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:02.513776  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:02.513831  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:02.551350  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:02.551389  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:02.592179  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:02.592221  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:02.105793  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:02.606184  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:03.105904  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:03.605445  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:04.106077  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:04.606229  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:05.105417  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:05.606061  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:06.106060  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:06.606049  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:03.503723  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Start
	I0127 03:01:03.503936  949037 main.go:141] libmachine: (embed-certs-896179) starting domain...
	I0127 03:01:03.503979  949037 main.go:141] libmachine: (embed-certs-896179) ensuring networks are active...
	I0127 03:01:03.504669  949037 main.go:141] libmachine: (embed-certs-896179) Ensuring network default is active
	I0127 03:01:03.505051  949037 main.go:141] libmachine: (embed-certs-896179) Ensuring network mk-embed-certs-896179 is active
	I0127 03:01:03.505381  949037 main.go:141] libmachine: (embed-certs-896179) getting domain XML...
	I0127 03:01:03.506266  949037 main.go:141] libmachine: (embed-certs-896179) creating domain...
	I0127 03:01:04.752437  949037 main.go:141] libmachine: (embed-certs-896179) waiting for IP...
	I0127 03:01:04.753358  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:04.753745  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:04.753864  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:04.753731  949072 retry.go:31] will retry after 269.895727ms: waiting for domain to come up
	I0127 03:01:05.025401  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:05.025933  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:05.025963  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:05.025901  949072 retry.go:31] will retry after 244.173916ms: waiting for domain to come up
	I0127 03:01:05.271563  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:05.272152  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:05.272184  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:05.272118  949072 retry.go:31] will retry after 317.086542ms: waiting for domain to come up
	I0127 03:01:05.590541  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:05.591024  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:05.591057  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:05.590975  949072 retry.go:31] will retry after 473.959162ms: waiting for domain to come up
	I0127 03:01:06.066760  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:06.067316  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:06.067377  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:06.067285  949072 retry.go:31] will retry after 727.190625ms: waiting for domain to come up
	I0127 03:01:06.796113  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:06.796549  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:06.796608  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:06.796537  949072 retry.go:31] will retry after 675.609356ms: waiting for domain to come up
	I0127 03:01:07.473361  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:07.473897  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:07.473922  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:07.473871  949072 retry.go:31] will retry after 1.011623345s: waiting for domain to come up
	I0127 03:01:05.241558  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:07.740591  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:09.741513  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:05.112544  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:05.113318  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:05.113385  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:05.113451  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:05.157229  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:05.157262  946842 cri.go:89] found id: ""
	I0127 03:01:05.157273  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:05.157340  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.168319  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:05.168403  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:05.203025  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:05.203061  946842 cri.go:89] found id: ""
	I0127 03:01:05.203072  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:05.203133  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.208511  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:05.208571  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:05.248501  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:05.248525  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:05.248529  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:05.248532  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:05.248535  946842 cri.go:89] found id: ""
	I0127 03:01:05.248542  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:05.248607  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.252653  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.256600  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.260327  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.263974  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:05.264040  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:05.299337  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:05.299367  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:05.299374  946842 cri.go:89] found id: ""
	I0127 03:01:05.299386  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:05.299509  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.303760  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.308882  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:05.308979  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:05.344832  946842 cri.go:89] found id: ""
	I0127 03:01:05.344868  946842 logs.go:282] 0 containers: []
	W0127 03:01:05.344881  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:05.344889  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:05.344989  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:05.382379  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:05.382403  946842 cri.go:89] found id: ""
	I0127 03:01:05.382412  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:05.382484  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.386591  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:05.386671  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:05.428404  946842 cri.go:89] found id: ""
	I0127 03:01:05.428440  946842 logs.go:282] 0 containers: []
	W0127 03:01:05.428452  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:05.428460  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:05.428516  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:05.468515  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:05.468550  946842 cri.go:89] found id: ""
	I0127 03:01:05.468560  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:05.468626  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:05.472874  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:05.472916  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:05.510739  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:05.510785  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:05.551454  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:05.551492  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:05.605243  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:05.605285  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:05.642926  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:05.642966  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:06.039374  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:06.039426  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:06.082017  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:06.082065  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:06.125349  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:06.125392  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:06.168658  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:06.168711  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:06.247209  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:06.247254  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:06.304443  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:06.304486  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:06.351678  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:06.351727  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:06.460709  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:06.460769  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:06.539224  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:06.539256  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:06.539273  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:06.598520  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:06.598556  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:09.115270  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:09.116089  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:09.116160  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:09.116226  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:09.153002  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:09.153037  946842 cri.go:89] found id: ""
	I0127 03:01:09.153048  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:09.153114  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.157315  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:09.157396  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:09.193853  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:09.193886  946842 cri.go:89] found id: ""
	I0127 03:01:09.193898  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:09.193966  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.197921  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:09.197984  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:09.233369  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:09.233398  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:09.233403  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:09.233406  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:09.233409  946842 cri.go:89] found id: ""
	I0127 03:01:09.233419  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:09.233495  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.237845  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.242184  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.246131  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.249840  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:09.249918  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:09.287273  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:09.287299  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:09.287303  946842 cri.go:89] found id: ""
	I0127 03:01:09.287311  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:09.287377  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.291517  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.295411  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:09.295498  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:09.330242  946842 cri.go:89] found id: ""
	I0127 03:01:09.330274  946842 logs.go:282] 0 containers: []
	W0127 03:01:09.330285  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:09.330294  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:09.330360  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:09.365038  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:09.365069  946842 cri.go:89] found id: ""
	I0127 03:01:09.365079  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:09.365146  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.369176  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:09.369255  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:09.408350  946842 cri.go:89] found id: ""
	I0127 03:01:09.408383  946842 logs.go:282] 0 containers: []
	W0127 03:01:09.408390  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:09.408396  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:09.408450  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:09.452033  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:09.452070  946842 cri.go:89] found id: ""
	I0127 03:01:09.452082  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:09.452161  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:09.459819  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:09.459848  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:09.502010  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:09.502060  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:09.538463  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:09.538501  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:09.571655  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:09.571694  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:09.907632  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:09.907670  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:09.956233  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:09.956284  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:07.105309  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:07.605217  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:08.105723  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:08.605621  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:09.106178  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:09.606084  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:10.105183  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:10.606188  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:11.105282  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:11.605971  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:08.486845  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:08.487440  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:08.487475  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:08.487394  949072 retry.go:31] will retry after 1.413453947s: waiting for domain to come up
	I0127 03:01:09.903614  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:09.904181  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:09.904215  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:09.904153  949072 retry.go:31] will retry after 1.696861619s: waiting for domain to come up
	I0127 03:01:11.602361  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:11.602714  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:11.602761  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:11.602711  949072 retry.go:31] will retry after 1.591655655s: waiting for domain to come up
	I0127 03:01:13.196406  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:13.197163  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:13.197198  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:13.197023  949072 retry.go:31] will retry after 2.796280413s: waiting for domain to come up
	I0127 03:01:12.240058  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:14.240396  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:10.057842  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:10.057889  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:10.102451  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:10.102490  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:10.152100  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:10.152147  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:10.195823  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:10.195859  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:10.250025  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:10.250059  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:10.264789  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:10.264823  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:10.350740  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:10.350772  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:10.350789  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:10.390407  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:10.390439  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:10.480209  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:10.480253  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:13.029213  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:13.030046  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:13.030137  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:13.030198  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:13.077996  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:13.078029  946842 cri.go:89] found id: ""
	I0127 03:01:13.078039  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:13.078114  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.082607  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:13.082683  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:13.120726  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:13.120757  946842 cri.go:89] found id: ""
	I0127 03:01:13.120767  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:13.120829  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.125057  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:13.125143  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:13.164420  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:13.164460  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:13.164466  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:13.164472  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:13.164476  946842 cri.go:89] found id: ""
	I0127 03:01:13.164486  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:13.164558  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.170114  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.175438  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.179477  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.183196  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:13.183270  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:13.231507  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:13.231542  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:13.231549  946842 cri.go:89] found id: ""
	I0127 03:01:13.231559  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:13.231617  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.236308  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.240974  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:13.241042  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:13.275623  946842 cri.go:89] found id: ""
	I0127 03:01:13.275653  946842 logs.go:282] 0 containers: []
	W0127 03:01:13.275664  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:13.275672  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:13.275741  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:13.312544  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:13.312577  946842 cri.go:89] found id: ""
	I0127 03:01:13.312589  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:13.312665  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.316857  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:13.316946  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:13.357904  946842 cri.go:89] found id: ""
	I0127 03:01:13.357943  946842 logs.go:282] 0 containers: []
	W0127 03:01:13.357954  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:13.357962  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:13.358028  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:13.395848  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:13.395875  946842 cri.go:89] found id: ""
	I0127 03:01:13.395888  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:13.395953  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:13.400352  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:13.400381  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:13.488266  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:13.488291  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:13.488308  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:13.540504  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:13.540543  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:13.584537  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:13.584576  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:13.624040  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:13.624087  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:13.969677  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:13.969732  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:14.078479  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:14.078529  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:14.135436  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:14.135475  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:14.221980  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:14.222023  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:14.279073  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:14.279115  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:14.294661  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:14.294702  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:14.335635  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:14.335669  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:14.383513  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:14.383551  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:14.419696  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:14.419736  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:14.453877  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:14.453911  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:12.105696  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:12.605432  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:13.106231  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:13.606040  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:14.105980  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:14.605924  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:15.105984  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:15.606247  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:16.105236  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:16.605248  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:15.996717  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:15.997315  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:15.997351  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:15.997262  949072 retry.go:31] will retry after 2.945792597s: waiting for domain to come up
	I0127 03:01:16.240760  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:18.740354  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:17.003885  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:17.004648  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:17.004717  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:17.004772  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:17.046426  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:17.046455  946842 cri.go:89] found id: ""
	I0127 03:01:17.046465  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:17.046521  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.050341  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:17.050416  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:17.085182  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:17.085212  946842 cri.go:89] found id: ""
	I0127 03:01:17.085222  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:17.085275  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.089009  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:17.089074  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:17.123028  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:17.123059  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:17.123065  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:17.123069  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:17.123073  946842 cri.go:89] found id: ""
	I0127 03:01:17.123082  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:17.123148  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.127067  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.130739  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.134254  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.137682  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:17.137732  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:17.170187  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:17.170220  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:17.170226  946842 cri.go:89] found id: ""
	I0127 03:01:17.170236  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:17.170306  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.174137  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.177781  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:17.177838  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:17.214557  946842 cri.go:89] found id: ""
	I0127 03:01:17.214584  946842 logs.go:282] 0 containers: []
	W0127 03:01:17.214592  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:17.214599  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:17.214652  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:17.250821  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:17.250845  946842 cri.go:89] found id: ""
	I0127 03:01:17.250853  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:17.250909  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.254705  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:17.254765  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:17.289089  946842 cri.go:89] found id: ""
	I0127 03:01:17.289118  946842 logs.go:282] 0 containers: []
	W0127 03:01:17.289127  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:17.289133  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:17.289183  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:17.323791  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:17.323818  946842 cri.go:89] found id: ""
	I0127 03:01:17.323826  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:17.323877  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:17.327617  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:17.327640  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:17.363103  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:17.363139  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:17.405980  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:17.406011  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:17.449377  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:17.449413  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:17.491032  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:17.491067  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:17.572789  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:17.572855  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:17.628608  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:17.628644  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:17.661813  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:17.661847  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:17.764401  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:17.764442  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:17.800757  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:17.800793  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:17.835787  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:17.835825  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:17.875167  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:17.875204  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:17.941376  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:17.941406  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:17.941423  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:18.260294  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:18.260335  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:18.302903  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:18.302936  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:17.105518  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:17.606164  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:18.106222  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:18.605298  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:19.106118  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:19.605959  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:20.106167  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:20.606146  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:21.105936  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:21.606155  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:18.945175  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:18.945606  949037 main.go:141] libmachine: (embed-certs-896179) DBG | unable to find current IP address of domain embed-certs-896179 in network mk-embed-certs-896179
	I0127 03:01:18.945645  949037 main.go:141] libmachine: (embed-certs-896179) DBG | I0127 03:01:18.945577  949072 retry.go:31] will retry after 4.177718793s: waiting for domain to come up
	I0127 03:01:23.125973  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.126587  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has current primary IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.126606  949037 main.go:141] libmachine: (embed-certs-896179) found domain IP: 192.168.61.190
	I0127 03:01:23.126614  949037 main.go:141] libmachine: (embed-certs-896179) reserving static IP address...
	I0127 03:01:23.127118  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "embed-certs-896179", mac: "52:54:00:23:51:90", ip: "192.168.61.190"} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.127156  949037 main.go:141] libmachine: (embed-certs-896179) reserved static IP address 192.168.61.190 for domain embed-certs-896179
	I0127 03:01:23.127181  949037 main.go:141] libmachine: (embed-certs-896179) DBG | skip adding static IP to network mk-embed-certs-896179 - found existing host DHCP lease matching {name: "embed-certs-896179", mac: "52:54:00:23:51:90", ip: "192.168.61.190"}
	I0127 03:01:23.127198  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Getting to WaitForSSH function...
	I0127 03:01:23.127214  949037 main.go:141] libmachine: (embed-certs-896179) waiting for SSH...
	I0127 03:01:23.129624  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.130037  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.130071  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.130144  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Using SSH client type: external
	I0127 03:01:23.130200  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa (-rw-------)
	I0127 03:01:23.130255  949037 main.go:141] libmachine: (embed-certs-896179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:01:23.130277  949037 main.go:141] libmachine: (embed-certs-896179) DBG | About to run SSH command:
	I0127 03:01:23.130310  949037 main.go:141] libmachine: (embed-certs-896179) DBG | exit 0
	I0127 03:01:23.257140  949037 main.go:141] libmachine: (embed-certs-896179) DBG | SSH cmd err, output: <nil>: 
	I0127 03:01:23.257557  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetConfigRaw
	I0127 03:01:23.258233  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetIP
	I0127 03:01:23.261036  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.261429  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.261458  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.261732  949037 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/config.json ...
	I0127 03:01:23.261938  949037 machine.go:93] provisionDockerMachine start ...
	I0127 03:01:23.261957  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:23.262171  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.264639  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.265072  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.265101  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.265247  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:23.265442  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.265635  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.265811  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:23.265975  949037 main.go:141] libmachine: Using SSH client type: native
	I0127 03:01:23.266258  949037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 03:01:23.266280  949037 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:01:23.373249  949037 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:01:23.373283  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetMachineName
	I0127 03:01:23.373517  949037 buildroot.go:166] provisioning hostname "embed-certs-896179"
	I0127 03:01:20.740614  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:23.240492  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:20.816910  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:20.817583  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:20.817650  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:20.817716  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:20.853140  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:20.853164  946842 cri.go:89] found id: ""
	I0127 03:01:20.853173  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:20.853235  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.857257  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:20.857332  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:20.891197  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:20.891234  946842 cri.go:89] found id: ""
	I0127 03:01:20.891246  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:20.891306  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.895675  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:20.895754  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:20.930386  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:20.930410  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:20.930414  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:20.930417  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:20.930420  946842 cri.go:89] found id: ""
	I0127 03:01:20.930428  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:20.930494  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.934320  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.938145  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.941584  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.944860  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:20.944935  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:20.980231  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:20.980261  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:20.980267  946842 cri.go:89] found id: ""
	I0127 03:01:20.980296  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:20.980374  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.984388  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:20.987976  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:20.988030  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:21.022186  946842 cri.go:89] found id: ""
	I0127 03:01:21.022215  946842 logs.go:282] 0 containers: []
	W0127 03:01:21.022223  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:21.022230  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:21.022280  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:21.064344  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:21.064372  946842 cri.go:89] found id: ""
	I0127 03:01:21.064382  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:21.064432  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:21.068437  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:21.068501  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:21.101972  946842 cri.go:89] found id: ""
	I0127 03:01:21.102000  946842 logs.go:282] 0 containers: []
	W0127 03:01:21.102009  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:21.102015  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:21.102084  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:21.136116  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:21.136147  946842 cri.go:89] found id: ""
	I0127 03:01:21.136157  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:21.136221  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:21.140118  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:21.140145  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:21.175404  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:21.175441  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:21.217523  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:21.217555  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:21.251878  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:21.251911  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:21.327575  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:21.327619  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:21.359965  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:21.359996  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:21.398744  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:21.398790  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:21.499467  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:21.499510  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:21.540175  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:21.540208  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:21.553621  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:21.553651  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:21.596342  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:21.596378  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:21.635041  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:21.635072  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:21.685911  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:21.685955  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:21.758300  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:21.758329  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:21.758345  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:21.801186  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:21.801229  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:24.615930  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:24.616718  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:24.616805  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:24.616871  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:24.655358  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:24.655388  946842 cri.go:89] found id: ""
	I0127 03:01:24.655398  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:24.655452  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.659372  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:24.659462  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:24.698614  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:24.698643  946842 cri.go:89] found id: ""
	I0127 03:01:24.698652  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:24.698706  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.702522  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:24.702601  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:24.755119  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:24.755151  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:24.755158  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:24.755163  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:24.755167  946842 cri.go:89] found id: ""
	I0127 03:01:24.755178  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:24.755249  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.759592  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.763532  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.768035  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.771693  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:24.771757  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:24.811719  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:24.811743  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:24.811748  946842 cri.go:89] found id: ""
	I0127 03:01:24.811759  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:24.811820  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.816077  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.819881  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:24.819945  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:24.858947  946842 cri.go:89] found id: ""
	I0127 03:01:24.858981  946842 logs.go:282] 0 containers: []
	W0127 03:01:24.858989  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:24.858996  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:24.859082  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:24.893564  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:24.893601  946842 cri.go:89] found id: ""
	I0127 03:01:24.893628  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:24.893704  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.897692  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:24.897761  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:24.934084  946842 cri.go:89] found id: ""
	I0127 03:01:24.934116  946842 logs.go:282] 0 containers: []
	W0127 03:01:24.934136  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:24.934144  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:24.934207  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:24.967622  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:24.967652  946842 cri.go:89] found id: ""
	I0127 03:01:24.967664  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:24.967731  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:24.971842  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:24.971885  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:23.373558  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetMachineName
	I0127 03:01:23.373835  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.376597  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.376971  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.377003  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.377225  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:23.377438  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.377625  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.377801  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:23.377974  949037 main.go:141] libmachine: Using SSH client type: native
	I0127 03:01:23.378194  949037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 03:01:23.378208  949037 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-896179 && echo "embed-certs-896179" | sudo tee /etc/hostname
	I0127 03:01:23.499654  949037 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-896179
	
	I0127 03:01:23.499689  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.502681  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.503048  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.503080  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.503217  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:23.503405  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.503601  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.503741  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:23.503894  949037 main.go:141] libmachine: Using SSH client type: native
	I0127 03:01:23.504077  949037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 03:01:23.504093  949037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-896179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-896179/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-896179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:01:23.618467  949037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:01:23.618503  949037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:01:23.618524  949037 buildroot.go:174] setting up certificates
	I0127 03:01:23.618536  949037 provision.go:84] configureAuth start
	I0127 03:01:23.618546  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetMachineName
	I0127 03:01:23.618886  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetIP
	I0127 03:01:23.622102  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.622484  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.622515  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.622635  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.625136  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.625490  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.625541  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.625685  949037 provision.go:143] copyHostCerts
	I0127 03:01:23.625750  949037 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:01:23.625761  949037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:01:23.625853  949037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:01:23.626012  949037 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:01:23.626024  949037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:01:23.626053  949037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:01:23.626110  949037 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:01:23.626118  949037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:01:23.626142  949037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:01:23.626198  949037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.embed-certs-896179 san=[127.0.0.1 192.168.61.190 embed-certs-896179 localhost minikube]
	I0127 03:01:23.823099  949037 provision.go:177] copyRemoteCerts
	I0127 03:01:23.823160  949037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:01:23.823187  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.826021  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.826359  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.826381  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.826586  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:23.826782  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.826930  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:23.827059  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:23.911704  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 03:01:23.936351  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:01:23.958916  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:01:23.981711  949037 provision.go:87] duration metric: took 363.15967ms to configureAuth
	I0127 03:01:23.981746  949037 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:01:23.981992  949037 config.go:182] Loaded profile config "embed-certs-896179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:01:23.982094  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:23.984682  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.985133  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:23.985169  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:23.985377  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:23.985595  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.985738  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:23.985852  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:23.986084  949037 main.go:141] libmachine: Using SSH client type: native
	I0127 03:01:23.986254  949037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 03:01:23.986312  949037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:01:24.205343  949037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:01:24.205373  949037 machine.go:96] duration metric: took 943.421451ms to provisionDockerMachine
	I0127 03:01:24.205390  949037 start.go:293] postStartSetup for "embed-certs-896179" (driver="kvm2")
	I0127 03:01:24.205406  949037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:01:24.205451  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:24.205806  949037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:01:24.205864  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:24.208622  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.209035  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:24.209073  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.209182  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:24.209375  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:24.209528  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:24.209660  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:24.290961  949037 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:01:24.295413  949037 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:01:24.295445  949037 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:01:24.295506  949037 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:01:24.295589  949037 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:01:24.295727  949037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:01:24.305578  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:01:24.329875  949037 start.go:296] duration metric: took 124.46627ms for postStartSetup
	I0127 03:01:24.329921  949037 fix.go:56] duration metric: took 20.848385772s for fixHost
	I0127 03:01:24.329945  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:24.332820  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.333314  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:24.333348  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.333585  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:24.333795  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:24.333997  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:24.334157  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:24.334319  949037 main.go:141] libmachine: Using SSH client type: native
	I0127 03:01:24.334554  949037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.190 22 <nil> <nil>}
	I0127 03:01:24.334568  949037 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:01:24.441403  949037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946884.398687629
	
	I0127 03:01:24.441428  949037 fix.go:216] guest clock: 1737946884.398687629
	I0127 03:01:24.441435  949037 fix.go:229] Guest: 2025-01-27 03:01:24.398687629 +0000 UTC Remote: 2025-01-27 03:01:24.329925628 +0000 UTC m=+20.995981124 (delta=68.762001ms)
	I0127 03:01:24.441478  949037 fix.go:200] guest clock delta is within tolerance: 68.762001ms
	I0127 03:01:24.441483  949037 start.go:83] releasing machines lock for "embed-certs-896179", held for 20.959960608s
	I0127 03:01:24.441515  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:24.441800  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetIP
	I0127 03:01:24.444810  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.445207  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:24.445230  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.445406  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:24.445919  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:24.446127  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:24.446263  949037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:01:24.446316  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:24.446361  949037 ssh_runner.go:195] Run: cat /version.json
	I0127 03:01:24.446388  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:24.449112  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.449461  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:24.449489  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.449554  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.449663  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:24.449841  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:24.450017  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:24.450056  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:24.450089  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:24.450197  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:24.450263  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:24.450404  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:24.450581  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:24.450749  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:24.558564  949037 ssh_runner.go:195] Run: systemctl --version
	I0127 03:01:24.564534  949037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:01:24.712379  949037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:01:24.718574  949037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:01:24.718657  949037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:01:24.739961  949037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:01:24.739994  949037 start.go:495] detecting cgroup driver to use...
	I0127 03:01:24.740104  949037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:01:24.758316  949037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:01:24.774155  949037 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:01:24.774215  949037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:01:24.790946  949037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:01:24.807156  949037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:01:24.929993  949037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:01:25.105788  949037 docker.go:233] disabling docker service ...
	I0127 03:01:25.105868  949037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:01:25.127517  949037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:01:25.141075  949037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:01:25.253058  949037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:01:25.385842  949037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:01:25.401856  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:01:25.421411  949037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:01:25.421486  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.433414  949037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:01:25.433501  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.446359  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.456995  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.468448  949037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:01:25.478809  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.488650  949037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.507331  949037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:01:25.517884  949037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:01:25.528887  949037 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:01:25.528973  949037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:01:25.547574  949037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:01:25.560484  949037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:01:25.681352  949037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:01:25.794918  949037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:01:25.794994  949037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:01:25.799773  949037 start.go:563] Will wait 60s for crictl version
	I0127 03:01:25.799833  949037 ssh_runner.go:195] Run: which crictl
	I0127 03:01:25.803580  949037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:01:25.841185  949037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:01:25.841293  949037 ssh_runner.go:195] Run: crio --version
	I0127 03:01:25.873253  949037 ssh_runner.go:195] Run: crio --version
	I0127 03:01:25.912212  949037 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:01:22.105884  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:22.605380  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:23.106160  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:23.606158  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:24.105161  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:24.606188  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:25.105579  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:25.605228  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:26.106135  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:26.605709  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:26.605812  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:26.656718  948597 cri.go:89] found id: ""
	I0127 03:01:26.656752  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.656764  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:26.656774  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:26.656857  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:26.702459  948597 cri.go:89] found id: ""
	I0127 03:01:26.702493  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.702506  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:26.702516  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:26.702610  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:26.752122  948597 cri.go:89] found id: ""
	I0127 03:01:26.752158  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.752170  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:26.752178  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:26.752243  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:26.793710  948597 cri.go:89] found id: ""
	I0127 03:01:26.793745  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.793757  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:26.793765  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:26.793831  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:25.913432  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetIP
	I0127 03:01:25.916372  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:25.916729  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:25.916762  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:25.917038  949037 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:01:25.922131  949037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:01:25.938120  949037 kubeadm.go:883] updating cluster {Name:embed-certs-896179 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-896179 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:01:25.938248  949037 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:01:25.938325  949037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:01:25.983328  949037 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:01:25.983422  949037 ssh_runner.go:195] Run: which lz4
	I0127 03:01:25.987556  949037 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:01:25.991634  949037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:01:25.991680  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:01:27.339969  949037 crio.go:462] duration metric: took 1.352459689s to copy over tarball
	I0127 03:01:27.340052  949037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:01:25.242114  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:27.243151  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:29.740819  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:25.012722  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:25.012757  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:25.046438  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:25.046477  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:25.402887  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:25.402915  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:25.513108  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:25.513145  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:25.528503  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:25.528540  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:25.622713  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:25.622809  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:25.622831  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:25.657704  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:25.657741  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:25.703851  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:25.703897  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:25.745498  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:25.745536  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:25.799055  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:25.799095  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:25.847880  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:25.847927  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:25.891994  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:25.892047  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:25.942118  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:25.942154  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:26.028505  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:26.028549  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:28.574456  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:28.575204  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:28.575267  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:28.575330  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:28.616702  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:28.616742  946842 cri.go:89] found id: ""
	I0127 03:01:28.616754  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:28.616831  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.621125  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:28.621211  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:28.658449  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:28.658482  946842 cri.go:89] found id: ""
	I0127 03:01:28.658493  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:28.658560  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.662830  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:28.662922  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:28.702024  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:28.702056  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:28.702062  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:28.702066  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:28.702070  946842 cri.go:89] found id: ""
	I0127 03:01:28.702091  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:28.702167  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.708821  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.713696  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.718778  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.723658  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:28.723736  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:28.767332  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:28.767360  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:28.767366  946842 cri.go:89] found id: ""
	I0127 03:01:28.767377  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:28.767429  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.771576  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.775402  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:28.775474  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:28.813236  946842 cri.go:89] found id: ""
	I0127 03:01:28.813266  946842 logs.go:282] 0 containers: []
	W0127 03:01:28.813274  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:28.813282  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:28.813356  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:28.848158  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:28.848185  946842 cri.go:89] found id: ""
	I0127 03:01:28.848193  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:28.848248  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.853019  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:28.853091  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:28.887789  946842 cri.go:89] found id: ""
	I0127 03:01:28.887837  946842 logs.go:282] 0 containers: []
	W0127 03:01:28.887849  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:28.887861  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:28.887937  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:28.935998  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:28.936034  946842 cri.go:89] found id: ""
	I0127 03:01:28.936046  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:28.936123  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:28.941447  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:28.941483  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:28.978976  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:28.979024  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:29.069289  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:29.069335  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:29.119285  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:29.119342  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:29.163745  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:29.163783  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:29.210716  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:29.210760  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:29.274619  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:29.274658  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:29.387281  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:29.387327  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:29.446525  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:29.446562  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:29.491286  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:29.491320  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:29.543927  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:29.543963  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:29.588420  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:29.588454  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:29.623895  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:29.623933  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:26.839972  948597 cri.go:89] found id: ""
	I0127 03:01:26.840011  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.840023  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:26.840030  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:26.840105  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:26.882134  948597 cri.go:89] found id: ""
	I0127 03:01:26.882190  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.882204  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:26.882212  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:26.882285  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:26.927229  948597 cri.go:89] found id: ""
	I0127 03:01:26.927265  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.927278  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:26.927287  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:26.927365  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:26.970472  948597 cri.go:89] found id: ""
	I0127 03:01:26.970508  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.970521  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:26.970535  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:26.970552  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:27.038341  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:27.038375  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:27.056989  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:27.057027  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:27.251883  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:27.251913  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:27.251931  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:27.338605  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:27.338645  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:29.883659  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:29.899963  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:29.900074  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:29.946862  948597 cri.go:89] found id: ""
	I0127 03:01:29.946890  948597 logs.go:282] 0 containers: []
	W0127 03:01:29.946900  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:29.946909  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:29.946962  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:29.988020  948597 cri.go:89] found id: ""
	I0127 03:01:29.988063  948597 logs.go:282] 0 containers: []
	W0127 03:01:29.988075  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:29.988083  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:29.988148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:30.029188  948597 cri.go:89] found id: ""
	I0127 03:01:30.029217  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.029228  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:30.029236  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:30.029323  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:30.078544  948597 cri.go:89] found id: ""
	I0127 03:01:30.078578  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.078588  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:30.078597  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:30.078659  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:30.119963  948597 cri.go:89] found id: ""
	I0127 03:01:30.119999  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.120067  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:30.120085  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:30.120182  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:30.158221  948597 cri.go:89] found id: ""
	I0127 03:01:30.158256  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.158269  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:30.158277  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:30.158345  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:30.193422  948597 cri.go:89] found id: ""
	I0127 03:01:30.193465  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.193476  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:30.193484  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:30.193549  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:30.239030  948597 cri.go:89] found id: ""
	I0127 03:01:30.239065  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.239076  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:30.239090  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:30.239105  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:30.296486  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:30.296527  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:30.317398  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:30.317431  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:30.430177  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:30.430213  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:30.430233  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:30.514902  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:30.514955  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:29.544639  949037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204537838s)
	I0127 03:01:29.544675  949037 crio.go:469] duration metric: took 2.204676764s to extract the tarball
	I0127 03:01:29.544687  949037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:01:29.590990  949037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:01:29.637902  949037 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:01:29.637937  949037 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:01:29.637948  949037 kubeadm.go:934] updating node { 192.168.61.190 8443 v1.32.1 crio true true} ...
	I0127 03:01:29.638103  949037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-896179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-896179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:01:29.638195  949037 ssh_runner.go:195] Run: crio config
	I0127 03:01:29.696180  949037 cni.go:84] Creating CNI manager for ""
	I0127 03:01:29.696205  949037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:01:29.696215  949037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:01:29.696237  949037 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.190 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-896179 NodeName:embed-certs-896179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:01:29.696398  949037 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-896179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.190"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.190"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:01:29.696468  949037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:01:29.706341  949037 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:01:29.706430  949037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:01:29.716295  949037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0127 03:01:29.735234  949037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:01:29.754398  949037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0127 03:01:29.772277  949037 ssh_runner.go:195] Run: grep 192.168.61.190	control-plane.minikube.internal$ /etc/hosts
	I0127 03:01:29.776019  949037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:01:29.788959  949037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:01:29.923329  949037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:01:29.942266  949037 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179 for IP: 192.168.61.190
	I0127 03:01:29.942298  949037 certs.go:194] generating shared ca certs ...
	I0127 03:01:29.942321  949037 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:01:29.942504  949037 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:01:29.942557  949037 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:01:29.942570  949037 certs.go:256] generating profile certs ...
	I0127 03:01:29.942678  949037 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/client.key
	I0127 03:01:29.942750  949037 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/apiserver.key.d6311847
	I0127 03:01:29.942828  949037 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/proxy-client.key
	I0127 03:01:29.942997  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:01:29.943037  949037 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:01:29.943048  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:01:29.943082  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:01:29.943115  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:01:29.943152  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:01:29.943203  949037 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:01:29.944139  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:01:29.983233  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:01:30.030494  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:01:30.073486  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:01:30.120672  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 03:01:30.146355  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:01:30.184518  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:01:30.209074  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/embed-certs-896179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:01:30.241307  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:01:30.267713  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:01:30.296183  949037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:01:30.327341  949037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:01:30.344843  949037 ssh_runner.go:195] Run: openssl version
	I0127 03:01:30.351146  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:01:30.362751  949037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:01:30.368411  949037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:01:30.368496  949037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:01:30.374763  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:01:30.386692  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:01:30.398115  949037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:01:30.403030  949037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:01:30.403108  949037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:01:30.410110  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:01:30.422809  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:01:30.436511  949037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:01:30.441355  949037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:01:30.441419  949037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:01:30.447373  949037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:01:30.461158  949037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:01:30.467011  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:01:30.473239  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:01:30.479600  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:01:30.485492  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:01:30.491623  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:01:30.497845  949037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 03:01:30.503762  949037 kubeadm.go:392] StartCluster: {Name:embed-certs-896179 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-896179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.190 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:01:30.503853  949037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:01:30.503946  949037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:01:30.549050  949037 cri.go:89] found id: ""
	I0127 03:01:30.549154  949037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:01:30.560763  949037 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:01:30.560784  949037 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:01:30.560831  949037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:01:30.571785  949037 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:01:30.572597  949037 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-896179" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:01:30.573083  949037 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-896179" cluster setting kubeconfig missing "embed-certs-896179" context setting]
	I0127 03:01:30.573634  949037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:01:30.575167  949037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:01:30.584812  949037 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.190
	I0127 03:01:30.584848  949037 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:01:30.584864  949037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 03:01:30.584916  949037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:01:30.619991  949037 cri.go:89] found id: ""
	I0127 03:01:30.620084  949037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:01:30.637655  949037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:01:30.647653  949037 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:01:30.647684  949037 kubeadm.go:157] found existing configuration files:
	
	I0127 03:01:30.647742  949037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:01:30.656608  949037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:01:30.656687  949037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:01:30.665981  949037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:01:30.675981  949037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:01:30.676052  949037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:01:30.685120  949037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:01:30.693917  949037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:01:30.693982  949037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:01:30.704756  949037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:01:30.715115  949037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:01:30.715195  949037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:01:30.725782  949037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:01:30.736059  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:30.846936  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:31.751719  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:31.946558  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:32.007372  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:32.072559  949037 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:01:32.072641  949037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:32.572982  949037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:33.073061  949037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:31.740981  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:33.742131  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:30.027668  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:30.027723  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:30.046767  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:30.046808  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:30.131827  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:32.633916  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:32.634588  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:32.634668  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:32.634747  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:32.674382  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:32.674413  946842 cri.go:89] found id: ""
	I0127 03:01:32.674424  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:32.674493  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.678639  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:32.678731  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:32.713981  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:32.714016  946842 cri.go:89] found id: ""
	I0127 03:01:32.714027  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:32.714096  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.719350  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:32.719435  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:32.760289  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:32.760316  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:32.760320  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:32.760323  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:32.760326  946842 cri.go:89] found id: ""
	I0127 03:01:32.760333  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:32.760384  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.764876  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.769176  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.772879  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.776443  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:32.776509  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:32.811439  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:32.811471  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:32.811477  946842 cri.go:89] found id: ""
	I0127 03:01:32.811485  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:32.811548  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.815485  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.819139  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:32.819220  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:32.852730  946842 cri.go:89] found id: ""
	I0127 03:01:32.852767  946842 logs.go:282] 0 containers: []
	W0127 03:01:32.852780  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:32.852787  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:32.852857  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:32.897496  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:32.897523  946842 cri.go:89] found id: ""
	I0127 03:01:32.897532  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:32.897584  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.901794  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:32.901964  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:32.937000  946842 cri.go:89] found id: ""
	I0127 03:01:32.937037  946842 logs.go:282] 0 containers: []
	W0127 03:01:32.937049  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:32.937057  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:32.937132  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:32.982518  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:32.982555  946842 cri.go:89] found id: ""
	I0127 03:01:32.982566  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:32.982640  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:32.986914  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:32.986948  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:33.042209  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:33.042252  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:33.149369  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:33.149423  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:33.198656  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:33.198718  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:33.292129  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:33.292185  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:33.378489  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:33.378517  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:33.378534  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:33.427109  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:33.427147  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:33.481692  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:33.481731  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:33.920138  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:33.920191  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:33.975058  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:33.975095  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:34.028709  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:34.028749  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:34.068381  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:34.068418  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:34.104324  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:34.104360  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:34.120541  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:34.120573  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:34.164597  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:34.164650  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:33.056194  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:33.074196  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:33.074272  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:33.119152  948597 cri.go:89] found id: ""
	I0127 03:01:33.119190  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.119202  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:33.119211  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:33.119281  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:33.165100  948597 cri.go:89] found id: ""
	I0127 03:01:33.165137  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.165150  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:33.165159  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:33.165253  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:33.205774  948597 cri.go:89] found id: ""
	I0127 03:01:33.205826  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.205840  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:33.205851  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:33.205935  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:33.253573  948597 cri.go:89] found id: ""
	I0127 03:01:33.253607  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.253618  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:33.253627  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:33.253695  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:33.299536  948597 cri.go:89] found id: ""
	I0127 03:01:33.299573  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.299585  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:33.299592  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:33.299661  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:33.344784  948597 cri.go:89] found id: ""
	I0127 03:01:33.344820  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.344831  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:33.344840  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:33.344908  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:33.391564  948597 cri.go:89] found id: ""
	I0127 03:01:33.391600  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.391611  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:33.391620  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:33.391714  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:33.441344  948597 cri.go:89] found id: ""
	I0127 03:01:33.441377  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.441388  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:33.441401  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:33.441415  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:33.516970  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:33.517022  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:33.535279  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:33.535313  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:33.617985  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:33.618013  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:33.618032  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:33.715673  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:33.715739  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
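
The log-gathering pass above follows a fixed pattern: `crictl ps -a --quiet --name=<component>` to find matching container IDs, then `crictl logs --tail 400 <id>` for each hit (with journalctl used for kubelet and CRI-O themselves). As an editorial illustration only (not minikube's logs.go, which runs these over SSH), a standalone sketch of the same loop, assuming crictl is on PATH and sudo is available:

    // gatherlogs is an illustrative stand-in for the collection loop logged
    // above: for each control-plane component it asks crictl for matching
    // container IDs and then tails each container's log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("listing %s containers failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
    		}
    	}
    }

In the run above every component listing for process 948597 comes back empty, which is why the gathered output falls back to kubelet, dmesg, CRI-O journal, and `crictl ps -a`.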
	I0127 03:01:36.260552  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:36.279190  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:36.279290  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:36.337183  948597 cri.go:89] found id: ""
	I0127 03:01:36.337220  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.337232  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:36.337241  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:36.337310  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:36.384558  948597 cri.go:89] found id: ""
	I0127 03:01:36.384596  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.384608  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:36.384617  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:36.384686  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:36.439591  948597 cri.go:89] found id: ""
	I0127 03:01:36.439622  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.439633  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:36.439642  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:36.439713  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:36.484358  948597 cri.go:89] found id: ""
	I0127 03:01:36.484395  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.484412  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:36.484420  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:36.484496  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:36.527632  948597 cri.go:89] found id: ""
	I0127 03:01:36.527665  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.527676  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:36.527684  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:36.527750  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:36.568669  948597 cri.go:89] found id: ""
	I0127 03:01:36.568707  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.568720  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:36.568729  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:36.568801  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:36.605428  948597 cri.go:89] found id: ""
	I0127 03:01:36.605459  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.605468  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:36.605478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:36.605550  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:36.645714  948597 cri.go:89] found id: ""
	I0127 03:01:36.645745  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.645754  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:36.645766  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:36.645781  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:36.731365  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:36.731403  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:36.731419  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:33.572886  949037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:33.594161  949037 api_server.go:72] duration metric: took 1.521596425s to wait for apiserver process to appear ...
	I0127 03:01:33.594199  949037 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:01:33.594225  949037 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8443/healthz ...
	I0127 03:01:36.354413  949037 api_server.go:279] https://192.168.61.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:01:36.354456  949037 api_server.go:103] status: https://192.168.61.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:01:36.354485  949037 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8443/healthz ...
	I0127 03:01:36.434930  949037 api_server.go:279] https://192.168.61.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:01:36.434973  949037 api_server.go:103] status: https://192.168.61.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:01:36.595356  949037 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8443/healthz ...
	I0127 03:01:36.603510  949037 api_server.go:279] https://192.168.61.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:01:36.603543  949037 api_server.go:103] status: https://192.168.61.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:01:37.095212  949037 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8443/healthz ...
	I0127 03:01:37.100287  949037 api_server.go:279] https://192.168.61.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:01:37.100335  949037 api_server.go:103] status: https://192.168.61.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:01:37.595019  949037 api_server.go:253] Checking apiserver healthz at https://192.168.61.190:8443/healthz ...
	I0127 03:01:37.601363  949037 api_server.go:279] https://192.168.61.190:8443/healthz returned 200:
	ok
	I0127 03:01:37.612679  949037 api_server.go:141] control plane version: v1.32.1
	I0127 03:01:37.612721  949037 api_server.go:131] duration metric: took 4.018511968s to wait for apiserver health ...
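
The healthz exchange above (the anonymous probe first rejected with 403, then 500 while etcd and the bootstrap post-start hooks come up, then 200) can be reproduced against the same endpoint. A minimal sketch, assuming only what the log shows: the apiserver serves /healthz on 192.168.61.190:8443 and the unauthenticated check has to skip TLS verification. This is not minikube's api_server.go, just an illustration of the probe loop:

    // healthzpoll GETs https://<host>:8443/healthz until it returns 200 or a
    // deadline expires, printing the body (the [+]/[-] check list) on failure.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.61.190:8443/healthz" // endpoint taken from the log above
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("probe failed:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%s returned %d\n%s\n", url, resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }

On a healthy apiserver the same per-check breakdown can usually be requested explicitly by appending ?verbose to the health endpoint, as documented for the apiserver health APIs.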
	I0127 03:01:37.612735  949037 cni.go:84] Creating CNI manager for ""
	I0127 03:01:37.612744  949037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:01:37.614136  949037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:01:37.615212  949037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:01:37.650014  949037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
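
The bridge CNI step above only records that a generated conflist of 496 bytes was copied to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown. Purely as a rough illustration of what a bridge-plus-portmap chain of that kind typically looks like (not minikube's literal file, and the subnet is an assumption), a sketch that writes such a config with plain Go file I/O:

    // writeconflist mirrors the "mkdir -p /etc/cni/net.d" + "scp memory -->
    // /etc/cni/net.d/1-k8s.conflist" steps with an illustrative bridge chain.
    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil { // "sudo mkdir -p /etc/cni/net.d"
    		log.Fatal(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }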
	I0127 03:01:37.682982  949037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:01:37.700681  949037 system_pods.go:59] 8 kube-system pods found
	I0127 03:01:37.700730  949037 system_pods.go:61] "coredns-668d6bf9bc-clwqr" [a6b88987-b816-40c7-88cf-ac07b4e866fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:01:37.700740  949037 system_pods.go:61] "etcd-embed-certs-896179" [5bf6b759-f304-4249-aa10-f450befd6b9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:01:37.700748  949037 system_pods.go:61] "kube-apiserver-embed-certs-896179" [9b92e899-1aa8-4c3a-8376-59f04ddb8afb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:01:37.700755  949037 system_pods.go:61] "kube-controller-manager-embed-certs-896179" [5a27296d-156e-43d9-9fd0-edd87211fd4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:01:37.700764  949037 system_pods.go:61] "kube-proxy-bvqkk" [8599cc91-ec3b-4c01-9200-91c7fcf29dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 03:01:37.700770  949037 system_pods.go:61] "kube-scheduler-embed-certs-896179" [1cb4be20-3256-4396-a7d8-9bea72a04ca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:01:37.700775  949037 system_pods.go:61] "metrics-server-f79f97bbb-2bcfv" [b797b1f1-23f2-46d0-8568-684978d6af75] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:01:37.700783  949037 system_pods.go:61] "storage-provisioner" [54a77526-9f20-4cb7-aeeb-96de6106a45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:01:37.700790  949037 system_pods.go:74] duration metric: took 17.780549ms to wait for pod list to return data ...
	I0127 03:01:37.700801  949037 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:01:37.705644  949037 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:01:37.705681  949037 node_conditions.go:123] node cpu capacity is 2
	I0127 03:01:37.705714  949037 node_conditions.go:105] duration metric: took 4.906642ms to run NodePressure ...
	I0127 03:01:37.705738  949037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:01:38.058223  949037 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 03:01:38.064654  949037 kubeadm.go:739] kubelet initialised
	I0127 03:01:38.064690  949037 kubeadm.go:740] duration metric: took 6.432649ms waiting for restarted kubelet to initialise ...
	I0127 03:01:38.064704  949037 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:01:38.070440  949037 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.077489  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.077519  949037 pod_ready.go:82] duration metric: took 7.040368ms for pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.077529  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.077536  949037 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.084647  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "etcd-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.084675  949037 pod_ready.go:82] duration metric: took 7.130619ms for pod "etcd-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.084686  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "etcd-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.084693  949037 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.092453  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.092497  949037 pod_ready.go:82] duration metric: took 7.795405ms for pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.092511  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.092521  949037 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.100127  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.100162  949037 pod_ready.go:82] duration metric: took 7.629553ms for pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.100177  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.100184  949037 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bvqkk" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.487510  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "kube-proxy-bvqkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.487538  949037 pod_ready.go:82] duration metric: took 387.344641ms for pod "kube-proxy-bvqkk" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.487549  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "kube-proxy-bvqkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.487558  949037 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:38.886530  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.886562  949037 pod_ready.go:82] duration metric: took 398.995494ms for pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:38.886572  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:38.886583  949037 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:39.286785  949037 pod_ready.go:98] node "embed-certs-896179" hosting pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:39.286818  949037 pod_ready.go:82] duration metric: took 400.225235ms for pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace to be "Ready" ...
	E0127 03:01:39.286828  949037 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-896179" hosting pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:39.286836  949037 pod_ready.go:39] duration metric: took 1.222120516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
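
Each pod_ready check above fetches a system pod and inspects its Ready condition, skipping pods whose node reports "Ready":"False" (which is what every entry in this run hits). A minimal client-go sketch of just the condition check, assuming the kubeconfig path shown earlier in the log and one of the pod names from this run; the real helper also applies the node-readiness skip and the 4m0s wait loop:

    // podready loads a kubeconfig, fetches one kube-system pod, and reports
    // whether its PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-embed-certs-896179", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }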
	I0127 03:01:39.286857  949037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:01:39.298928  949037 ops.go:34] apiserver oom_adj: -16
	I0127 03:01:39.298961  949037 kubeadm.go:597] duration metric: took 8.73817034s to restartPrimaryControlPlane
	I0127 03:01:39.298974  949037 kubeadm.go:394] duration metric: took 8.795220083s to StartCluster
	I0127 03:01:39.298994  949037 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:01:39.299086  949037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:01:39.300466  949037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:01:39.300729  949037 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.190 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:01:39.300813  949037 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:01:39.300934  949037 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-896179"
	I0127 03:01:39.300955  949037 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-896179"
	I0127 03:01:39.300960  949037 addons.go:69] Setting default-storageclass=true in profile "embed-certs-896179"
	I0127 03:01:39.300987  949037 addons.go:69] Setting dashboard=true in profile "embed-certs-896179"
	I0127 03:01:39.301007  949037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-896179"
	I0127 03:01:39.301029  949037 addons.go:238] Setting addon dashboard=true in "embed-certs-896179"
	W0127 03:01:39.300965  949037 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:01:39.301123  949037 host.go:66] Checking if "embed-certs-896179" exists ...
	I0127 03:01:39.300996  949037 config.go:182] Loaded profile config "embed-certs-896179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:01:39.300991  949037 addons.go:69] Setting metrics-server=true in profile "embed-certs-896179"
	I0127 03:01:39.301185  949037 addons.go:238] Setting addon metrics-server=true in "embed-certs-896179"
	W0127 03:01:39.301197  949037 addons.go:247] addon metrics-server should already be in state true
	W0127 03:01:39.301044  949037 addons.go:247] addon dashboard should already be in state true
	I0127 03:01:39.301235  949037 host.go:66] Checking if "embed-certs-896179" exists ...
	I0127 03:01:39.301333  949037 host.go:66] Checking if "embed-certs-896179" exists ...
	I0127 03:01:39.301522  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.301572  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.301575  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.301620  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.301653  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.301691  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.301700  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.301750  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.302400  949037 out.go:177] * Verifying Kubernetes components...
	I0127 03:01:39.303807  949037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:01:39.318030  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0127 03:01:39.318050  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0127 03:01:39.318045  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0127 03:01:39.318513  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.318570  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.318699  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.319058  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.319079  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.319192  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.319209  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.319211  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.319232  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.319467  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.319526  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.319543  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.320080  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.320129  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.320327  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:39.320783  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.320824  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.321870  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0127 03:01:39.322382  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.323142  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.323163  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.324813  949037 addons.go:238] Setting addon default-storageclass=true in "embed-certs-896179"
	W0127 03:01:39.324826  949037 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:01:39.324849  949037 host.go:66] Checking if "embed-certs-896179" exists ...
	I0127 03:01:39.325047  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.325204  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.325243  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.325645  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.325689  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.338271  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0127 03:01:39.338860  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.339434  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.339455  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.339780  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 03:01:39.340005  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.340203  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:39.340294  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.340696  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.340713  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.341113  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.341324  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:39.342535  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:39.343364  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:39.344678  949037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:01:39.345673  949037 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:01:39.345920  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0127 03:01:39.346613  949037 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:01:39.346628  949037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:01:39.346644  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:39.346698  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.347487  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.347517  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.348187  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.349012  949037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:01:39.349132  949037 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:01:39.349146  949037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:01:39.349831  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.350264  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:39.350320  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.350453  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:01:39.350467  949037 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:01:39.350468  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:39.350487  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:39.350666  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:39.350796  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:39.351200  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:39.352410  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0127 03:01:39.353541  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.353895  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:39.353939  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.354109  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:39.354287  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:39.354441  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:39.354562  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:39.377631  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.378255  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.378287  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.378757  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.379053  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:39.381052  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:39.383156  949037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:01:36.241605  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:38.242178  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:36.707264  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:36.708274  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:36.708351  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:36.708412  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:36.757305  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:36.757385  946842 cri.go:89] found id: ""
	I0127 03:01:36.757401  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:36.757472  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.762272  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:36.762349  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:36.808058  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:36.808091  946842 cri.go:89] found id: ""
	I0127 03:01:36.808102  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:36.808170  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.812591  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:36.812679  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:36.854960  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:36.854986  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:36.854990  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:36.854993  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:36.854996  946842 cri.go:89] found id: ""
	I0127 03:01:36.855006  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:36.855070  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.859731  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.863697  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.868053  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.872022  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:36.872102  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:36.910969  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:36.910999  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:36.911005  946842 cri.go:89] found id: ""
	I0127 03:01:36.911015  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:36.911077  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.915542  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:36.920033  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:36.920108  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:36.959315  946842 cri.go:89] found id: ""
	I0127 03:01:36.959348  946842 logs.go:282] 0 containers: []
	W0127 03:01:36.959360  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:36.959368  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:36.959433  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:37.001393  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:37.001417  946842 cri.go:89] found id: ""
	I0127 03:01:37.001428  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:37.001477  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:37.005893  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:37.005957  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:37.049492  946842 cri.go:89] found id: ""
	I0127 03:01:37.049522  946842 logs.go:282] 0 containers: []
	W0127 03:01:37.049531  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:37.049537  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:37.049603  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:37.083310  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:37.083334  946842 cri.go:89] found id: ""
	I0127 03:01:37.083343  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:37.083396  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:37.087417  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:37.087445  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:37.139546  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:37.139587  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:37.190110  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:37.190148  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:37.566425  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:37.566467  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:37.690468  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:37.690503  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:37.777271  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:37.777302  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:37.777321  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:37.822178  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:37.822222  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:37.906973  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:37.907030  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:37.946621  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:37.946659  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:37.993533  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:37.993578  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:38.008351  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:38.008386  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:38.055327  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:38.055386  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:38.095824  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:38.095879  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:38.142315  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:38.142347  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:38.184142  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:38.184174  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:39.384376  949037 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:01:39.384396  949037 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:01:39.384419  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:39.387858  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.388347  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:39.388387  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.388602  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:39.388774  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:39.388955  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:39.389126  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:39.396901  949037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
	I0127 03:01:39.397660  949037 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:01:39.398261  949037 main.go:141] libmachine: Using API Version  1
	I0127 03:01:39.398286  949037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:01:39.398659  949037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:01:39.398926  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetState
	I0127 03:01:39.400683  949037 main.go:141] libmachine: (embed-certs-896179) Calling .DriverName
	I0127 03:01:39.400908  949037 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:01:39.400940  949037 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:01:39.400962  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHHostname
	I0127 03:01:39.405518  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.406102  949037 main.go:141] libmachine: (embed-certs-896179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:51:90", ip: ""} in network mk-embed-certs-896179: {Iface:virbr3 ExpiryTime:2025-01-27 04:01:14 +0000 UTC Type:0 Mac:52:54:00:23:51:90 Iaid: IPaddr:192.168.61.190 Prefix:24 Hostname:embed-certs-896179 Clientid:01:52:54:00:23:51:90}
	I0127 03:01:39.406139  949037 main.go:141] libmachine: (embed-certs-896179) DBG | domain embed-certs-896179 has defined IP address 192.168.61.190 and MAC address 52:54:00:23:51:90 in network mk-embed-certs-896179
	I0127 03:01:39.406257  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHPort
	I0127 03:01:39.406491  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHKeyPath
	I0127 03:01:39.406681  949037 main.go:141] libmachine: (embed-certs-896179) Calling .GetSSHUsername
	I0127 03:01:39.406835  949037 sshutil.go:53] new ssh client: &{IP:192.168.61.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/embed-certs-896179/id_rsa Username:docker}
	I0127 03:01:39.502245  949037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:01:39.523200  949037 node_ready.go:35] waiting up to 6m0s for node "embed-certs-896179" to be "Ready" ...
	I0127 03:01:39.590537  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:01:39.590576  949037 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:01:39.616767  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:01:39.616800  949037 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:01:39.674088  949037 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:01:39.674119  949037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:01:39.688900  949037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:01:39.693430  949037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:01:39.745432  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:01:39.745462  949037 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:01:39.750502  949037 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:01:39.750536  949037 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:01:39.829457  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:01:39.829487  949037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:01:39.838842  949037 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:01:39.838875  949037 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:01:39.920684  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:01:39.920716  949037 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:01:39.923718  949037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:01:39.985910  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:01:39.985945  949037 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:01:40.111820  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:01:40.111849  949037 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:01:40.181964  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:01:40.181991  949037 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:01:40.285023  949037 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:01:40.285053  949037 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:01:40.349024  949037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:01:41.279814  949037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.586254969s)
	I0127 03:01:41.279891  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.279902  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.280055  949037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.591101403s)
	I0127 03:01:41.280112  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.280124  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.280335  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.280353  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.280363  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.280371  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.280505  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.280525  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.280548  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.280556  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.282207  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.282215  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.282214  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.282241  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.282566  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.282581  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.300938  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.300969  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.301264  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.301283  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.412413  949037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.488609112s)
	I0127 03:01:41.412482  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.412497  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.412849  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.412895  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.412903  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.412912  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.412919  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.414796  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.414817  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.414825  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.414841  949037 addons.go:479] Verifying addon metrics-server=true in "embed-certs-896179"
	I0127 03:01:41.534901  949037 node_ready.go:53] node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:41.719470  949037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.370374123s)
	I0127 03:01:41.719531  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.719546  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.719883  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.719934  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.719952  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.719970  949037 main.go:141] libmachine: Making call to close driver server
	I0127 03:01:41.719980  949037 main.go:141] libmachine: (embed-certs-896179) Calling .Close
	I0127 03:01:41.721491  949037 main.go:141] libmachine: (embed-certs-896179) DBG | Closing plugin on server side
	I0127 03:01:41.721541  949037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:01:41.721549  949037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:01:41.723293  949037 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-896179 addons enable metrics-server
	
	I0127 03:01:41.724670  949037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:01:36.814212  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:36.814254  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:36.856194  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:36.856233  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:36.916349  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:36.916381  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:39.436532  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:39.449140  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:39.449210  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:39.481787  948597 cri.go:89] found id: ""
	I0127 03:01:39.481818  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.481827  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:39.481833  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:39.481914  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:39.518592  948597 cri.go:89] found id: ""
	I0127 03:01:39.518621  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.518630  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:39.518636  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:39.518689  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:39.553944  948597 cri.go:89] found id: ""
	I0127 03:01:39.553981  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.553991  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:39.553998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:39.554065  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:39.592879  948597 cri.go:89] found id: ""
	I0127 03:01:39.592910  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.592941  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:39.592951  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:39.593019  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:39.627918  948597 cri.go:89] found id: ""
	I0127 03:01:39.627957  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.627969  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:39.627977  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:39.628048  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:39.672283  948597 cri.go:89] found id: ""
	I0127 03:01:39.672314  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.672326  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:39.672334  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:39.672402  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:39.717676  948597 cri.go:89] found id: ""
	I0127 03:01:39.717715  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.717729  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:39.717738  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:39.717816  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:39.769531  948597 cri.go:89] found id: ""
	I0127 03:01:39.769562  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.769570  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:39.769580  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:39.769592  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:39.824255  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:39.824308  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:39.839595  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:39.839637  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:39.934427  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:39.934459  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:39.934475  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:40.029244  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:40.029287  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:41.725878  949037 addons.go:514] duration metric: took 2.425074367s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:01:40.741881  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:43.240794  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:40.730396  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:40.731014  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:40.731094  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:40.731157  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:40.770835  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:40.770867  946842 cri.go:89] found id: ""
	I0127 03:01:40.770878  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:40.770946  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.775225  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:40.775307  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:40.817448  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:40.817478  946842 cri.go:89] found id: ""
	I0127 03:01:40.817489  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:40.817550  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.821712  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:40.821783  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:40.856774  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:40.856804  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:40.856810  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:40.856814  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:40.856818  946842 cri.go:89] found id: ""
	I0127 03:01:40.856828  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:40.856895  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.862032  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.866878  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.871483  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.875718  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:40.875785  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:40.910569  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:40.910613  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:40.910620  946842 cri.go:89] found id: ""
	I0127 03:01:40.910631  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:40.910702  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.917250  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.921019  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:40.921101  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:40.954812  946842 cri.go:89] found id: ""
	I0127 03:01:40.954849  946842 logs.go:282] 0 containers: []
	W0127 03:01:40.954862  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:40.954871  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:40.954945  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:40.991569  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:40.991601  946842 cri.go:89] found id: ""
	I0127 03:01:40.991613  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:40.991688  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:40.995768  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:40.995848  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:41.029232  946842 cri.go:89] found id: ""
	I0127 03:01:41.029265  946842 logs.go:282] 0 containers: []
	W0127 03:01:41.029276  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:41.029284  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:41.029357  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:41.064702  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:41.064737  946842 cri.go:89] found id: ""
	I0127 03:01:41.064748  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:41.064813  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:41.069981  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:41.070015  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:41.086272  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:41.086319  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:41.135846  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:41.135883  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:41.191372  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:41.191424  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:41.232154  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:41.232191  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:41.295709  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:41.295747  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:41.356578  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:41.356619  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:41.394288  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:41.394334  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:41.480344  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:41.480390  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:41.525734  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:41.525784  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:41.637362  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:41.637406  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:41.716073  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:41.716102  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:41.716122  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:41.755415  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:41.755445  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:42.083408  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:42.083463  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:42.130887  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:42.130933  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:44.668103  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:44.668768  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:44.668838  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:44.668913  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:44.706056  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:44.706090  946842 cri.go:89] found id: ""
	I0127 03:01:44.706101  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:44.706176  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.710180  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:44.710281  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:44.748808  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:44.748839  946842 cri.go:89] found id: ""
	I0127 03:01:44.748847  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:44.748909  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.753235  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:44.753317  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:44.788449  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:44.788479  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:44.788483  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:44.788487  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:44.788491  946842 cri.go:89] found id: ""
	I0127 03:01:44.788499  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:44.788554  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.792683  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.796277  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.799774  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.803575  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:44.803649  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:44.841157  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:44.841189  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:44.841195  946842 cri.go:89] found id: ""
	I0127 03:01:44.841206  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:44.841273  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.845457  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.850262  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:44.850326  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:44.886053  946842 cri.go:89] found id: ""
	I0127 03:01:44.886086  946842 logs.go:282] 0 containers: []
	W0127 03:01:44.886100  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:44.886108  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:44.886188  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:44.919722  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:44.919762  946842 cri.go:89] found id: ""
	I0127 03:01:44.919771  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:44.919834  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:44.923727  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:44.923794  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:44.966819  946842 cri.go:89] found id: ""
	I0127 03:01:44.966858  946842 logs.go:282] 0 containers: []
	W0127 03:01:44.966871  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:44.966879  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:44.966948  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:45.001135  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:45.001171  946842 cri.go:89] found id: ""
	I0127 03:01:45.001182  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:45.001254  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:45.005523  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:45.005554  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:42.569345  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:42.581864  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:42.581947  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:42.615021  948597 cri.go:89] found id: ""
	I0127 03:01:42.615051  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.615059  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:42.615065  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:42.615142  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:42.648856  948597 cri.go:89] found id: ""
	I0127 03:01:42.648889  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.648897  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:42.648903  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:42.648979  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:42.680794  948597 cri.go:89] found id: ""
	I0127 03:01:42.680822  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.680831  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:42.680838  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:42.680916  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:42.713381  948597 cri.go:89] found id: ""
	I0127 03:01:42.713421  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.713433  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:42.713441  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:42.713511  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:42.746982  948597 cri.go:89] found id: ""
	I0127 03:01:42.747009  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.747020  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:42.747026  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:42.747096  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:42.781132  948597 cri.go:89] found id: ""
	I0127 03:01:42.781161  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.781169  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:42.781175  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:42.781227  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:42.814006  948597 cri.go:89] found id: ""
	I0127 03:01:42.814054  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.814070  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:42.814078  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:42.814148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:42.846896  948597 cri.go:89] found id: ""
	I0127 03:01:42.846924  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.846932  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:42.846942  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:42.846955  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:42.887825  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:42.887860  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:42.936334  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:42.936382  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:42.949813  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:42.949856  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:43.018993  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:43.019020  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:43.019034  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:45.599348  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:45.613254  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:45.613351  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:45.649722  948597 cri.go:89] found id: ""
	I0127 03:01:45.649750  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.649759  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:45.649765  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:45.649820  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:45.683304  948597 cri.go:89] found id: ""
	I0127 03:01:45.683337  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.683358  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:45.683366  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:45.683433  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:45.720349  948597 cri.go:89] found id: ""
	I0127 03:01:45.720379  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.720388  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:45.720393  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:45.720444  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:45.756037  948597 cri.go:89] found id: ""
	I0127 03:01:45.756066  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.756077  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:45.756085  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:45.756152  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:45.789081  948597 cri.go:89] found id: ""
	I0127 03:01:45.789111  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.789123  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:45.789132  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:45.789201  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:45.825809  948597 cri.go:89] found id: ""
	I0127 03:01:45.825841  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.825852  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:45.825860  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:45.825923  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:45.859304  948597 cri.go:89] found id: ""
	I0127 03:01:45.859339  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.859352  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:45.859360  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:45.859429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:45.895925  948597 cri.go:89] found id: ""
	I0127 03:01:45.895959  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.895971  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:45.895990  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:45.896006  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:45.910961  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:45.910995  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:45.982139  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:45.982173  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:45.982192  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:46.067354  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:46.067398  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:46.105325  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:46.105360  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:44.027494  949037 node_ready.go:53] node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:46.527119  949037 node_ready.go:53] node "embed-certs-896179" has status "Ready":"False"
	I0127 03:01:47.027019  949037 node_ready.go:49] node "embed-certs-896179" has status "Ready":"True"
	I0127 03:01:47.027071  949037 node_ready.go:38] duration metric: took 7.503819599s for node "embed-certs-896179" to be "Ready" ...
	I0127 03:01:47.027082  949037 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:01:47.031976  949037 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:47.041394  949037 pod_ready.go:93] pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:47.041418  949037 pod_ready.go:82] duration metric: took 9.415939ms for pod "coredns-668d6bf9bc-clwqr" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:47.041427  949037 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:45.241555  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:47.752008  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:45.110246  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:45.110292  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:45.127176  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:45.127225  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:45.176768  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:45.176810  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:45.260020  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:45.260070  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:45.297926  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:45.297964  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:45.377945  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:45.377974  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:45.377987  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:45.426059  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:45.426096  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:45.472396  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:45.472432  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:45.524778  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:45.524823  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:45.573919  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:45.573956  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:45.934248  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:45.934292  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:45.972345  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:45.972382  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:46.014293  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:46.014324  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:46.050135  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:46.050169  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:48.595552  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:48.596259  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:48.596331  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:48.596394  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:48.641485  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:48.641513  946842 cri.go:89] found id: ""
	I0127 03:01:48.641524  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:48.641587  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.645628  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:48.645688  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:48.682302  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:48.682332  946842 cri.go:89] found id: ""
	I0127 03:01:48.682347  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:48.682414  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.687706  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:48.687803  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:48.727866  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:48.727899  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:48.727906  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:48.727911  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:48.727915  946842 cri.go:89] found id: ""
	I0127 03:01:48.727927  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:48.727994  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.733973  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.739044  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.744988  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.749638  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:48.749724  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:48.790459  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:48.790492  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:48.790497  946842 cri.go:89] found id: ""
	I0127 03:01:48.790511  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:48.790576  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.795105  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.799846  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:48.799925  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:48.839848  946842 cri.go:89] found id: ""
	I0127 03:01:48.839881  946842 logs.go:282] 0 containers: []
	W0127 03:01:48.839890  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:48.839897  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:48.839958  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:48.885093  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:48.885121  946842 cri.go:89] found id: ""
	I0127 03:01:48.885132  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:48.885199  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.889389  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:48.889453  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:48.929544  946842 cri.go:89] found id: ""
	I0127 03:01:48.929575  946842 logs.go:282] 0 containers: []
	W0127 03:01:48.929586  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:48.929594  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:48.929661  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:48.973470  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:48.973503  946842 cri.go:89] found id: ""
	I0127 03:01:48.973515  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:48.973582  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:48.979095  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:48.979135  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:49.093508  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:49.093544  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:49.112196  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:49.112243  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:49.195864  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:49.195895  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:49.195913  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:49.237173  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:49.237215  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:49.323212  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:49.323257  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:49.365069  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:49.365114  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:49.721195  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:49.721247  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:49.762559  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:49.762603  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:49.805886  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:49.805929  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:49.844003  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:49.844050  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:49.892217  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:49.892263  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:49.944292  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:49.944328  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:49.994400  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:49.994439  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:48.658412  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:48.670985  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:48.671075  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:48.711794  948597 cri.go:89] found id: ""
	I0127 03:01:48.711828  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.711840  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:48.711849  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:48.711925  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:48.754553  948597 cri.go:89] found id: ""
	I0127 03:01:48.754581  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.754592  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:48.754600  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:48.754667  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:48.799891  948597 cri.go:89] found id: ""
	I0127 03:01:48.799917  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.799927  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:48.799936  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:48.800002  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:48.839365  948597 cri.go:89] found id: ""
	I0127 03:01:48.839405  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.839417  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:48.839426  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:48.839500  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:48.888994  948597 cri.go:89] found id: ""
	I0127 03:01:48.889027  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.889038  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:48.889046  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:48.889126  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:48.926255  948597 cri.go:89] found id: ""
	I0127 03:01:48.926290  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.926301  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:48.926310  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:48.926406  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:48.964873  948597 cri.go:89] found id: ""
	I0127 03:01:48.964905  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.964916  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:48.964945  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:48.965016  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:49.006585  948597 cri.go:89] found id: ""
	I0127 03:01:49.006617  948597 logs.go:282] 0 containers: []
	W0127 03:01:49.006627  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:49.006638  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:49.006653  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:49.073243  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:49.073293  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:49.089518  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:49.089553  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:49.174857  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:49.174892  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:49.174909  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:49.271349  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:49.271404  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:49.048635  949037 pod_ready.go:103] pod "etcd-embed-certs-896179" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:51.548851  949037 pod_ready.go:103] pod "etcd-embed-certs-896179" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:52.547746  949037 pod_ready.go:93] pod "etcd-embed-certs-896179" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:52.547769  949037 pod_ready.go:82] duration metric: took 5.506336269s for pod "etcd-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.547780  949037 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.552730  949037 pod_ready.go:93] pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:52.552757  949037 pod_ready.go:82] duration metric: took 4.969414ms for pod "kube-apiserver-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.552770  949037 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.558137  949037 pod_ready.go:93] pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:52.558166  949037 pod_ready.go:82] duration metric: took 5.384569ms for pod "kube-controller-manager-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.558180  949037 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvqkk" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.564045  949037 pod_ready.go:93] pod "kube-proxy-bvqkk" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:52.564073  949037 pod_ready.go:82] duration metric: took 5.88473ms for pod "kube-proxy-bvqkk" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.564086  949037 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.569379  949037 pod_ready.go:93] pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace has status "Ready":"True"
	I0127 03:01:52.569401  949037 pod_ready.go:82] duration metric: took 5.306524ms for pod "kube-scheduler-embed-certs-896179" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:52.569413  949037 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace to be "Ready" ...
	I0127 03:01:50.240701  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:52.241003  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:54.740390  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:50.031673  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:50.031732  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:52.603701  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:52.604338  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:52.604401  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:52.604454  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:52.644567  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:52.644595  946842 cri.go:89] found id: ""
	I0127 03:01:52.644606  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:52.644671  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.649630  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:52.649709  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:52.682418  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:52.682458  946842 cri.go:89] found id: ""
	I0127 03:01:52.682469  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:52.682543  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.687109  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:52.687169  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:52.734465  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:52.734506  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:52.734514  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:52.734518  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:52.734523  946842 cri.go:89] found id: ""
	I0127 03:01:52.734534  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:52.734606  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.741394  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.745952  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.750025  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.754039  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:52.754127  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:52.797441  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:52.797467  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:52.797473  946842 cri.go:89] found id: ""
	I0127 03:01:52.797485  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:52.797539  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.802129  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.806186  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:52.806248  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:52.846256  946842 cri.go:89] found id: ""
	I0127 03:01:52.846291  946842 logs.go:282] 0 containers: []
	W0127 03:01:52.846300  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:52.846307  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:52.846370  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:52.884140  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:52.884168  946842 cri.go:89] found id: ""
	I0127 03:01:52.884177  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:52.884230  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.888439  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:52.888512  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:52.925905  946842 cri.go:89] found id: ""
	I0127 03:01:52.925945  946842 logs.go:282] 0 containers: []
	W0127 03:01:52.925954  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:52.925962  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:52.926034  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:52.958787  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:52.958819  946842 cri.go:89] found id: ""
	I0127 03:01:52.958829  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:52.958893  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:52.962896  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:52.962920  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:52.977546  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:52.977588  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:53.019558  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:53.019592  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:53.061947  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:53.061983  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:53.141971  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:53.142012  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:53.501564  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:53.501620  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:53.551653  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:53.551689  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:53.591409  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:53.591450  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:53.628981  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:53.629020  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:53.679804  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:53.679850  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:53.778299  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:53.778346  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:53.853020  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:53.853057  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:53.853076  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:53.897276  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:53.897314  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:53.933139  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:53.933174  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:53.967280  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:53.967314  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:51.821324  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:51.839569  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:51.839646  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:51.877408  948597 cri.go:89] found id: ""
	I0127 03:01:51.877437  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.877444  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:51.877450  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:51.877506  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:51.911605  948597 cri.go:89] found id: ""
	I0127 03:01:51.911654  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.911667  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:51.911676  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:51.911748  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:51.947033  948597 cri.go:89] found id: ""
	I0127 03:01:51.947078  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.947092  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:51.947101  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:51.947164  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:51.979689  948597 cri.go:89] found id: ""
	I0127 03:01:51.979725  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.979736  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:51.979744  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:51.979826  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:52.015971  948597 cri.go:89] found id: ""
	I0127 03:01:52.016011  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.016023  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:52.016031  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:52.016105  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:52.050395  948597 cri.go:89] found id: ""
	I0127 03:01:52.050427  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.050437  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:52.050446  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:52.050515  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:52.084279  948597 cri.go:89] found id: ""
	I0127 03:01:52.084315  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.084327  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:52.084336  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:52.084411  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:52.118989  948597 cri.go:89] found id: ""
	I0127 03:01:52.119022  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.119034  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:52.119047  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:52.119074  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:52.180108  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:52.180151  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:52.194532  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:52.194584  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:52.267927  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:52.267951  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:52.267975  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:52.345103  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:52.345145  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:54.884393  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:54.897841  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:54.897943  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:54.932485  948597 cri.go:89] found id: ""
	I0127 03:01:54.932524  948597 logs.go:282] 0 containers: []
	W0127 03:01:54.932536  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:54.932545  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:54.932689  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:54.968368  948597 cri.go:89] found id: ""
	I0127 03:01:54.968400  948597 logs.go:282] 0 containers: []
	W0127 03:01:54.968412  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:54.968419  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:54.968484  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:55.001707  948597 cri.go:89] found id: ""
	I0127 03:01:55.001743  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.001755  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:55.001762  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:55.001835  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:55.037616  948597 cri.go:89] found id: ""
	I0127 03:01:55.037654  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.037665  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:55.037672  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:55.037740  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:55.079188  948597 cri.go:89] found id: ""
	I0127 03:01:55.079219  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.079230  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:55.079251  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:55.079342  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:55.128821  948597 cri.go:89] found id: ""
	I0127 03:01:55.128855  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.128864  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:55.128872  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:55.128969  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:55.170723  948597 cri.go:89] found id: ""
	I0127 03:01:55.170751  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.170759  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:55.170765  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:55.170818  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:55.207344  948597 cri.go:89] found id: ""
	I0127 03:01:55.207385  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.207398  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:55.207408  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:55.207422  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:55.288046  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:55.288078  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:55.288097  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:55.366433  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:55.366484  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:55.403270  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:55.403317  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:55.455241  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:55.455298  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:54.579959  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:57.081758  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:56.740491  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:58.744370  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:56.514561  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:01:56.515233  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:01:56.515299  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:56.515361  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:56.563908  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:56.563938  946842 cri.go:89] found id: ""
	I0127 03:01:56.563958  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:01:56.564020  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.568256  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:56.568342  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:56.614465  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:56.614503  946842 cri.go:89] found id: ""
	I0127 03:01:56.614514  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:01:56.614584  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.618417  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:56.618499  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:56.655878  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:56.655908  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:56.655914  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:56.655919  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:56.655923  946842 cri.go:89] found id: ""
	I0127 03:01:56.655933  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:01:56.656003  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.660206  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.663982  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.667732  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.671301  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:56.671363  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:56.713042  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:56.713067  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:56.713076  946842 cri.go:89] found id: ""
	I0127 03:01:56.713084  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:01:56.713165  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.718198  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.723186  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:56.723269  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:56.768966  946842 cri.go:89] found id: ""
	I0127 03:01:56.768999  946842 logs.go:282] 0 containers: []
	W0127 03:01:56.769011  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:56.769019  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:56.769097  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:56.809339  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:56.809372  946842 cri.go:89] found id: ""
	I0127 03:01:56.809383  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:01:56.809452  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.823148  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:56.823245  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:56.859900  946842 cri.go:89] found id: ""
	I0127 03:01:56.859939  946842 logs.go:282] 0 containers: []
	W0127 03:01:56.859953  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:56.859962  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:01:56.860037  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:01:56.896995  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:56.897024  946842 cri.go:89] found id: ""
	I0127 03:01:56.897036  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:01:56.897104  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:01:56.901240  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:01:56.901275  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:01:56.937875  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:01:56.937921  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:01:56.992301  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:01:56.992341  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:01:57.072212  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:57.072249  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:57.430695  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:57.430744  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:57.446794  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:01:57.446837  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:01:57.511204  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:01:57.511258  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:57.553353  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:57.553390  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:57.628343  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:57.628372  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:01:57.628394  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:01:57.680152  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:01:57.680206  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:01:57.715865  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:01:57.715902  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:01:57.763492  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:57.763539  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:57.902558  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:01:57.902603  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:01:57.952089  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:01:57.952137  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:01:57.997608  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:01:57.997647  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:01:57.970581  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:57.987960  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:57.988048  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:58.035438  948597 cri.go:89] found id: ""
	I0127 03:01:58.035475  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.035485  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:58.035494  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:58.035565  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:58.071013  948597 cri.go:89] found id: ""
	I0127 03:01:58.071053  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.071065  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:58.071073  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:58.071148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:58.111925  948597 cri.go:89] found id: ""
	I0127 03:01:58.111964  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.111976  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:58.111983  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:58.112053  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:58.146183  948597 cri.go:89] found id: ""
	I0127 03:01:58.146220  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.146230  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:58.146238  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:58.146310  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:58.184977  948597 cri.go:89] found id: ""
	I0127 03:01:58.185005  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.185013  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:58.185019  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:58.185085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:58.223037  948597 cri.go:89] found id: ""
	I0127 03:01:58.223073  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.223084  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:58.223093  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:58.223174  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:58.258659  948597 cri.go:89] found id: ""
	I0127 03:01:58.258687  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.258695  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:58.258701  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:58.258753  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:58.296174  948597 cri.go:89] found id: ""
	I0127 03:01:58.296209  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.296220  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:58.296233  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:58.296256  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:58.309974  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:58.310009  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:58.397312  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:58.397338  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:58.397352  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:58.482188  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:58.482247  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:58.526400  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:58.526441  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:01.086115  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:01.098319  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:01.098400  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:01.135609  948597 cri.go:89] found id: ""
	I0127 03:02:01.135645  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.135657  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:01.135665  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:01.135739  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:01.174294  948597 cri.go:89] found id: ""
	I0127 03:02:01.174329  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.174340  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:01.174347  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:01.174422  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:01.210942  948597 cri.go:89] found id: ""
	I0127 03:02:01.210976  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.210987  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:01.210995  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:01.211069  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:01.249566  948597 cri.go:89] found id: ""
	I0127 03:02:01.249599  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.249610  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:01.249619  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:01.249696  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:01.289367  948597 cri.go:89] found id: ""
	I0127 03:02:01.289405  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.289415  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:01.289423  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:01.289489  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:01.324768  948597 cri.go:89] found id: ""
	I0127 03:02:01.324806  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.324816  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:01.324824  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:01.324876  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:01.363159  948597 cri.go:89] found id: ""
	I0127 03:02:01.363192  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.363204  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:01.363211  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:01.363279  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:01.401686  948597 cri.go:89] found id: ""
	I0127 03:02:01.401715  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.401724  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:01.401735  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:01.401746  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:01.443049  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:01.443093  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:01.495506  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:01.495548  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:01.509294  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:01.509329  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:01.574977  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:01.575010  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:01.575025  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:59.583240  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:02.075630  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.240785  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:03.739769  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:00.544866  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:00.545479  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:00.545548  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:00.545595  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:00.582922  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:00.582950  946842 cri.go:89] found id: ""
	I0127 03:02:00.582962  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:00.583017  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.587813  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:00.587886  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:00.623941  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:00.623974  946842 cri.go:89] found id: ""
	I0127 03:02:00.623985  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:00.624052  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.628052  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:00.628136  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:00.664973  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:00.665007  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:00.665012  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:00.665017  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:00.665022  946842 cri.go:89] found id: ""
	I0127 03:02:00.665031  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:00.665115  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.669095  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.672770  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.676555  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.679993  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:00.680062  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:00.713704  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:00.713734  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:00.713741  946842 cri.go:89] found id: ""
	I0127 03:02:00.713750  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:00.713825  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.717933  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.721978  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:00.722050  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:00.759552  946842 cri.go:89] found id: ""
	I0127 03:02:00.759585  946842 logs.go:282] 0 containers: []
	W0127 03:02:00.759593  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:00.759600  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:00.759659  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:00.798615  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:00.798648  946842 cri.go:89] found id: ""
	I0127 03:02:00.798661  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:00.798724  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.802728  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:00.802818  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:00.838427  946842 cri.go:89] found id: ""
	I0127 03:02:00.838462  946842 logs.go:282] 0 containers: []
	W0127 03:02:00.838476  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:00.838485  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:00.838546  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:00.881923  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:00.881948  946842 cri.go:89] found id: ""
	I0127 03:02:00.881956  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:00.882007  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:00.885896  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:00.885918  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:00.922249  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:00.922282  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:00.980592  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:00.980633  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:01.033354  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:01.033392  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:01.142310  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:01.142347  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:01.186504  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:01.186551  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:01.230670  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:01.230706  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:01.267491  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:01.267532  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:01.283944  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:01.283983  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:01.327596  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:01.327632  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:01.688680  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:01.688752  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:01.725157  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:01.725195  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:01.803063  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:01.803108  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:01.838687  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:01.838716  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:01.908059  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:01.908102  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:01.908124  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
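
The block above is one complete post-mortem gathering pass for the 946842 profile: minikube probes the apiserver /healthz endpoint (refused while the control plane is restarting), discovers each component's container ID with crictl, then tails the component, kubelet, CRI-O, and dmesg logs; only the describe-nodes step fails, because kubectl cannot reach localhost:8443. A rough shell sketch of the same pass, assembled from the commands logged above (curl -k is an assumed stand-in for minikube's internal healthz probe; the IP and container ID are the ones logged):

	curl -k https://192.168.50.96:8443/healthz                # refused while kube-apiserver is down
	sudo crictl ps -a --quiet --name=kube-apiserver           # discover the component's container ID
	sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922
	sudo journalctl -u kubelet -n 400                         # kubelet unit logs
	sudo journalctl -u crio -n 400                            # CRI-O unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
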
	I0127 03:02:04.454963  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:04.455634  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:04.455691  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:04.455754  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:04.497078  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:04.497104  946842 cri.go:89] found id: ""
	I0127 03:02:04.497116  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:04.497183  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.501562  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:04.501634  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:04.543589  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:04.543620  946842 cri.go:89] found id: ""
	I0127 03:02:04.543630  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:04.543691  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.547697  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:04.547763  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:04.585473  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:04.585496  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:04.585500  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:04.585503  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:04.585506  946842 cri.go:89] found id: ""
	I0127 03:02:04.585514  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:04.585564  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.589688  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.593891  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.598827  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.602485  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:04.602548  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:04.642290  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:04.642316  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:04.642322  946842 cri.go:89] found id: ""
	I0127 03:02:04.642332  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:04.642398  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.646223  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.649912  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:04.649986  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:04.691204  946842 cri.go:89] found id: ""
	I0127 03:02:04.691241  946842 logs.go:282] 0 containers: []
	W0127 03:02:04.691253  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:04.691261  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:04.691328  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:04.727819  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:04.727850  946842 cri.go:89] found id: ""
	I0127 03:02:04.727861  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:04.727937  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.732038  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:04.732119  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:04.767979  946842 cri.go:89] found id: ""
	I0127 03:02:04.768011  946842 logs.go:282] 0 containers: []
	W0127 03:02:04.768019  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:04.768026  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:04.768081  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:04.804378  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:04.804404  946842 cri.go:89] found id: ""
	I0127 03:02:04.804413  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:04.804468  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:04.809758  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:04.809792  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:04.846150  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:04.846180  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:04.884466  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:04.884497  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:04.925438  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:04.925470  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:04.174983  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:04.187588  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:04.187668  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:04.223414  948597 cri.go:89] found id: ""
	I0127 03:02:04.223448  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.223457  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:04.223463  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:04.223527  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:04.259031  948597 cri.go:89] found id: ""
	I0127 03:02:04.259071  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.259083  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:04.259091  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:04.259165  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:04.290320  948597 cri.go:89] found id: ""
	I0127 03:02:04.290357  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.290368  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:04.290374  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:04.290429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:04.322432  948597 cri.go:89] found id: ""
	I0127 03:02:04.322463  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.322472  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:04.322478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:04.322533  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:04.356422  948597 cri.go:89] found id: ""
	I0127 03:02:04.356458  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.356466  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:04.356472  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:04.356526  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:04.392999  948597 cri.go:89] found id: ""
	I0127 03:02:04.393034  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.393046  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:04.393054  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:04.393125  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:04.426275  948597 cri.go:89] found id: ""
	I0127 03:02:04.426305  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.426312  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:04.426318  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:04.426370  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:04.460208  948597 cri.go:89] found id: ""
	I0127 03:02:04.460234  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.460242  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:04.460252  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:04.460263  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:04.501349  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:04.501387  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:04.550576  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:04.550611  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:04.565042  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:04.565081  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:04.659906  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:04.659935  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:04.659953  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:04.075759  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:06.076664  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.741665  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:08.240445  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
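
The interleaved pod_ready lines come from two other profiles (PIDs 949037 and 947047) still polling their metrics-server pods, whose Ready condition stays False throughout this window. A hypothetical manual equivalent of that poll (not minikube's own code; the context name is a placeholder) might be:

	kubectl --context <profile> -n kube-system get pod metrics-server-f79f97bbb-6mqdm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False until the pod is Ready
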
	I0127 03:02:05.028125  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:05.028170  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:05.103945  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:05.103972  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:05.103986  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:05.137399  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:05.137433  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:05.185672  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:05.185716  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:05.526160  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:05.526204  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:05.577235  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:05.577269  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:05.614168  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:05.614201  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:05.647773  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:05.647805  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:05.661034  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:05.661092  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:05.703311  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:05.703348  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:05.749780  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:05.749814  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:08.336997  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:08.337638  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:08.337696  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:08.337751  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:08.379844  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:08.379875  946842 cri.go:89] found id: ""
	I0127 03:02:08.379886  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:08.379946  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.384115  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:08.384188  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:08.427994  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:08.428026  946842 cri.go:89] found id: ""
	I0127 03:02:08.428035  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:08.428088  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.432318  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:08.432417  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:08.471007  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:08.471038  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:08.471043  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:08.471046  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:08.471049  946842 cri.go:89] found id: ""
	I0127 03:02:08.471059  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:08.471140  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.475423  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.479530  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.483667  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.487425  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:08.487493  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:08.529029  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:08.529062  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:08.529067  946842 cri.go:89] found id: ""
	I0127 03:02:08.529078  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:08.529139  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.533671  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.537851  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:08.537922  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:08.573514  946842 cri.go:89] found id: ""
	I0127 03:02:08.573542  946842 logs.go:282] 0 containers: []
	W0127 03:02:08.573553  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:08.573561  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:08.573626  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:08.614633  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:08.614662  946842 cri.go:89] found id: ""
	I0127 03:02:08.614671  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:08.614727  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.618975  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:08.619043  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:08.659090  946842 cri.go:89] found id: ""
	I0127 03:02:08.659132  946842 logs.go:282] 0 containers: []
	W0127 03:02:08.659144  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:08.659152  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:08.659219  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:08.697004  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:08.697041  946842 cri.go:89] found id: ""
	I0127 03:02:08.697053  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:08.697135  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:08.701188  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:08.701216  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:08.749086  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:08.749124  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:08.792659  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:08.792694  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:08.828769  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:08.828814  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:08.867460  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:08.867495  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:08.911329  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:08.911373  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:08.948043  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:08.948090  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:09.050489  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:09.050539  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:09.131230  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:09.131278  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:09.180881  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:09.180948  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:09.196194  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:09.196227  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:09.239893  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:09.239929  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:09.286170  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:09.286212  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:09.323266  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:09.323305  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:09.647727  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:09.647783  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:09.715449  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:07.245086  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:07.257839  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:07.257908  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:07.296057  948597 cri.go:89] found id: ""
	I0127 03:02:07.296089  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.296098  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:07.296104  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:07.296177  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:07.329833  948597 cri.go:89] found id: ""
	I0127 03:02:07.329886  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.329914  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:07.329926  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:07.329994  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:07.364273  948597 cri.go:89] found id: ""
	I0127 03:02:07.364317  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.364329  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:07.364337  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:07.364406  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:07.399224  948597 cri.go:89] found id: ""
	I0127 03:02:07.399262  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.399274  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:07.399282  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:07.399377  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:07.437153  948597 cri.go:89] found id: ""
	I0127 03:02:07.437194  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.437205  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:07.437213  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:07.437285  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:07.472191  948597 cri.go:89] found id: ""
	I0127 03:02:07.472221  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.472230  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:07.472239  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:07.472295  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:07.507029  948597 cri.go:89] found id: ""
	I0127 03:02:07.507066  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.507078  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:07.507086  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:07.507185  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:07.540312  948597 cri.go:89] found id: ""
	I0127 03:02:07.540348  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.540360  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:07.540374  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:07.540392  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:07.589839  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:07.589893  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:07.603285  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:07.603321  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:07.679572  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:07.679597  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:07.679611  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:07.756859  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:07.756902  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
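
By contrast, the 948597 profile (a v1.20.0 control plane) has nothing running yet: pgrep finds no kube-apiserver process and every crictl query returns an empty ID list ("No container was found"), so only the host-level kubelet, dmesg, CRI-O, and container-status logs are collected, and the describe-nodes call fails the same way against localhost:8443. The discovery commands, copied from the log above:

	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo crictl ps -a --quiet --name=kube-apiserver           # empty output for every component on this node
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
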
	I0127 03:02:10.297730  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:10.310440  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:10.310510  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:10.343835  948597 cri.go:89] found id: ""
	I0127 03:02:10.343871  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.343883  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:10.343891  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:10.343949  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:10.383557  948597 cri.go:89] found id: ""
	I0127 03:02:10.383594  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.383605  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:10.383614  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:10.383695  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:10.426364  948597 cri.go:89] found id: ""
	I0127 03:02:10.426414  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.426425  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:10.426432  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:10.426513  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:10.463567  948597 cri.go:89] found id: ""
	I0127 03:02:10.463621  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.463633  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:10.463642  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:10.463705  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:10.498363  948597 cri.go:89] found id: ""
	I0127 03:02:10.498400  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.498411  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:10.498419  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:10.498495  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:10.532805  948597 cri.go:89] found id: ""
	I0127 03:02:10.532835  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.532847  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:10.532854  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:10.532951  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:10.568537  948597 cri.go:89] found id: ""
	I0127 03:02:10.568573  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.568583  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:10.568590  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:10.568662  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:10.607965  948597 cri.go:89] found id: ""
	I0127 03:02:10.608002  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.608013  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:10.608025  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:10.608040  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:10.658406  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:10.658447  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:10.671754  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:10.671801  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:10.741340  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:10.741367  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:10.741382  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:10.817535  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:10.817577  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:08.576711  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:10.577470  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:13.075332  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:10.741391  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.741493  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.217012  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:12.217719  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:12.217796  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:12.217852  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:12.264525  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:12.264551  946842 cri.go:89] found id: ""
	I0127 03:02:12.264559  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:12.264624  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.268560  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:12.268624  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:12.310859  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:12.310891  946842 cri.go:89] found id: ""
	I0127 03:02:12.310901  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:12.310960  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.315073  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:12.315136  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:12.354916  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:12.354950  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:12.354954  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:12.354958  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:12.354961  946842 cri.go:89] found id: ""
	I0127 03:02:12.354969  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:12.355025  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.359113  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.363014  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.366654  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.370355  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:12.370418  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:12.411976  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:12.412007  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:12.412013  946842 cri.go:89] found id: ""
	I0127 03:02:12.412024  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:12.412095  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.418130  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.421993  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:12.422054  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:12.465032  946842 cri.go:89] found id: ""
	I0127 03:02:12.465066  946842 logs.go:282] 0 containers: []
	W0127 03:02:12.465075  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:12.465091  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:12.465165  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:12.499649  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:12.499675  946842 cri.go:89] found id: ""
	I0127 03:02:12.499683  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:12.499734  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.503684  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:12.503749  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:12.549438  946842 cri.go:89] found id: ""
	I0127 03:02:12.549469  946842 logs.go:282] 0 containers: []
	W0127 03:02:12.549478  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:12.549484  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:12.549545  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:12.588347  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:12.588377  946842 cri.go:89] found id: ""
	I0127 03:02:12.588388  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:12.588454  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:12.592479  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:12.592511  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:12.628625  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:12.628657  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:12.662325  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:12.662366  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:12.705928  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:12.705967  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:12.773650  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:12.773680  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:12.773699  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:12.815047  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:12.815084  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:12.829702  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:12.829737  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:12.873191  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:12.873231  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:12.952909  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:12.952962  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:12.986269  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:12.986302  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:13.026452  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:13.026484  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:13.132722  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:13.132779  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:13.188530  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:13.188575  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:13.226590  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:13.226623  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:13.536514  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:13.536555  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:13.364226  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:13.376663  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:13.376748  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:13.415723  948597 cri.go:89] found id: ""
	I0127 03:02:13.415770  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.415784  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:13.415793  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:13.415894  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:13.453997  948597 cri.go:89] found id: ""
	I0127 03:02:13.454026  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.454034  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:13.454040  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:13.454099  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:13.495966  948597 cri.go:89] found id: ""
	I0127 03:02:13.495998  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.496009  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:13.496020  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:13.496085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:13.533583  948597 cri.go:89] found id: ""
	I0127 03:02:13.533635  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.533649  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:13.533659  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:13.533738  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:13.571359  948597 cri.go:89] found id: ""
	I0127 03:02:13.571392  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.571401  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:13.571408  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:13.571473  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:13.603720  948597 cri.go:89] found id: ""
	I0127 03:02:13.603748  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.603757  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:13.603763  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:13.603814  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:13.635945  948597 cri.go:89] found id: ""
	I0127 03:02:13.635980  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.635991  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:13.635999  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:13.636091  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:13.668778  948597 cri.go:89] found id: ""
	I0127 03:02:13.668807  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.668821  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:13.668838  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:13.668853  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:13.722543  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:13.722591  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:13.737899  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:13.737927  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:13.805217  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:13.805249  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:13.805264  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:13.882548  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:13.882590  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:16.423402  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:16.436808  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:16.436895  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:16.473315  948597 cri.go:89] found id: ""
	I0127 03:02:16.473350  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.473361  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:16.473370  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:16.473440  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:16.513258  948597 cri.go:89] found id: ""
	I0127 03:02:16.513292  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.513305  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:16.513320  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:16.513382  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:16.550193  948597 cri.go:89] found id: ""
	I0127 03:02:16.550231  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.550242  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:16.550250  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:16.550316  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:16.586397  948597 cri.go:89] found id: ""
	I0127 03:02:16.586430  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.586440  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:16.586448  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:16.586512  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:16.620605  948597 cri.go:89] found id: ""
	I0127 03:02:16.620642  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.620653  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:16.620661  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:16.620731  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:16.657792  948597 cri.go:89] found id: ""
	I0127 03:02:16.657825  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.657837  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:16.657846  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:16.657915  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:16.695941  948597 cri.go:89] found id: ""
	I0127 03:02:16.695976  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.695996  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:16.696006  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:16.696097  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:16.737119  948597 cri.go:89] found id: ""
	I0127 03:02:16.737152  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.737164  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:16.737176  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:16.737192  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:16.774412  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:16.774449  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:15.075514  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:17.076516  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:15.241377  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:17.235056  947047 pod_ready.go:82] duration metric: took 4m0.000937816s for pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:17.235088  947047 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 03:02:17.235109  947047 pod_ready.go:39] duration metric: took 4m9.543201397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:17.235141  947047 kubeadm.go:597] duration metric: took 4m51.045369992s to restartPrimaryControlPlane
	W0127 03:02:17.235217  947047 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:17.235246  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:02:16.077257  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:16.077878  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:16.077946  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:16.078006  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:16.118971  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:16.119004  946842 cri.go:89] found id: ""
	I0127 03:02:16.119015  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:16.119083  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.123165  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:16.123236  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:16.159694  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:16.159721  946842 cri.go:89] found id: ""
	I0127 03:02:16.159728  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:16.159793  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.163631  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:16.163691  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:16.197810  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:16.197842  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:16.197851  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:16.197855  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:16.197858  946842 cri.go:89] found id: ""
	I0127 03:02:16.197866  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:16.197925  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.202567  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.206595  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.210491  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.214233  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:16.214300  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:16.254430  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:16.254462  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:16.254466  946842 cri.go:89] found id: ""
	I0127 03:02:16.254474  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:16.254543  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.259042  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.262917  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:16.263001  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:16.295898  946842 cri.go:89] found id: ""
	I0127 03:02:16.295940  946842 logs.go:282] 0 containers: []
	W0127 03:02:16.295952  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:16.295960  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:16.296026  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:16.334891  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:16.334916  946842 cri.go:89] found id: ""
	I0127 03:02:16.334927  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:16.335000  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.339284  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:16.339359  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:16.375675  946842 cri.go:89] found id: ""
	I0127 03:02:16.375713  946842 logs.go:282] 0 containers: []
	W0127 03:02:16.375724  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:16.375733  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:16.375833  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:16.412182  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:16.412218  946842 cri.go:89] found id: ""
	I0127 03:02:16.412229  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:16.412285  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:16.416807  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:16.416833  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:16.492745  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:16.492772  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:16.492789  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:16.583192  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:16.583234  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:16.622319  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:16.622347  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:16.657415  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:16.657461  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:17.001967  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:17.002017  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:17.113958  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:17.114010  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:17.154498  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:17.154540  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:17.196654  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:17.196696  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:17.247521  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:17.247592  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:17.297279  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:17.297328  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:17.359602  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:17.359648  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:17.374733  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:17.374783  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:17.413470  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:17.413504  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:17.469389  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:17.469429  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:20.005099  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:20.005741  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:20.005807  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:20.005871  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:16.830564  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:16.830607  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:16.845433  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:16.845469  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:16.926137  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:16.926166  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:16.926183  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:19.509069  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:19.522347  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:19.522429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:19.556817  948597 cri.go:89] found id: ""
	I0127 03:02:19.556856  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.556867  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:19.556876  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:19.556967  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:19.591065  948597 cri.go:89] found id: ""
	I0127 03:02:19.591104  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.591120  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:19.591129  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:19.591199  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:19.626207  948597 cri.go:89] found id: ""
	I0127 03:02:19.626246  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.626260  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:19.626266  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:19.626320  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:19.658517  948597 cri.go:89] found id: ""
	I0127 03:02:19.658551  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.658559  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:19.658565  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:19.658617  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:19.691209  948597 cri.go:89] found id: ""
	I0127 03:02:19.691240  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.691249  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:19.691255  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:19.691306  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:19.728210  948597 cri.go:89] found id: ""
	I0127 03:02:19.728248  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.728260  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:19.728270  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:19.728332  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:19.764049  948597 cri.go:89] found id: ""
	I0127 03:02:19.764083  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.764092  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:19.764100  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:19.764167  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:19.795692  948597 cri.go:89] found id: ""
	I0127 03:02:19.795726  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.795736  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:19.795749  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:19.795767  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:19.808465  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:19.808506  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:19.879069  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:19.879091  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:19.879105  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:19.960288  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:19.960331  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:19.997481  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:19.997521  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:19.576443  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:21.577272  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:20.041630  946842 cri.go:89] found id: "ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:20.041665  946842 cri.go:89] found id: ""
	I0127 03:02:20.041675  946842 logs.go:282] 1 containers: [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922]
	I0127 03:02:20.041740  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.045865  946842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:20.045941  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:20.083695  946842 cri.go:89] found id: "b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:20.083726  946842 cri.go:89] found id: ""
	I0127 03:02:20.083737  946842 logs.go:282] 1 containers: [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0]
	I0127 03:02:20.083801  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.087884  946842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:20.087960  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:20.122802  946842 cri.go:89] found id: "3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:20.122836  946842 cri.go:89] found id: "de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:20.122842  946842 cri.go:89] found id: "9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:20.122847  946842 cri.go:89] found id: "6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:20.122851  946842 cri.go:89] found id: ""
	I0127 03:02:20.122861  946842 logs.go:282] 4 containers: [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e]
	I0127 03:02:20.122926  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.127063  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.130869  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.134887  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.138639  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:20.138695  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:20.173177  946842 cri.go:89] found id: "a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:20.173208  946842 cri.go:89] found id: "627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:20.173214  946842 cri.go:89] found id: ""
	I0127 03:02:20.173224  946842 logs.go:282] 2 containers: [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303]
	I0127 03:02:20.173276  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.177207  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.180675  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:20.180745  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:20.213953  946842 cri.go:89] found id: ""
	I0127 03:02:20.213987  946842 logs.go:282] 0 containers: []
	W0127 03:02:20.213999  946842 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:20.214007  946842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:20.214073  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:20.249001  946842 cri.go:89] found id: "da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:20.249032  946842 cri.go:89] found id: ""
	I0127 03:02:20.249043  946842 logs.go:282] 1 containers: [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e]
	I0127 03:02:20.249116  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.252997  946842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:20.253070  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:20.287368  946842 cri.go:89] found id: ""
	I0127 03:02:20.287407  946842 logs.go:282] 0 containers: []
	W0127 03:02:20.287417  946842 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:20.287428  946842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:02:20.287495  946842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:02:20.323494  946842 cri.go:89] found id: "2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:20.323525  946842 cri.go:89] found id: ""
	I0127 03:02:20.323537  946842 logs.go:282] 1 containers: [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641]
	I0127 03:02:20.323601  946842 ssh_runner.go:195] Run: which crictl
	I0127 03:02:20.327499  946842 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:20.327529  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:20.392660  946842 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:20.392689  946842 logs.go:123] Gathering logs for kube-apiserver [ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922] ...
	I0127 03:02:20.392711  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba2618a2e1edacc8a97d23e263de616deae67f3183d23e0d4844c7cce7b82922"
	I0127 03:02:20.432549  946842 logs.go:123] Gathering logs for coredns [3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef] ...
	I0127 03:02:20.432593  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e120ca493e49a009872d1d26c4175e2615eff4bd6f0ea377788a068703010ef"
	I0127 03:02:20.489110  946842 logs.go:123] Gathering logs for kube-scheduler [627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303] ...
	I0127 03:02:20.489151  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627270e7f7b9684ce4cb3e63dbb29e8568535cfd6eca5fbfd2a0411abe5fb303"
	I0127 03:02:20.528429  946842 logs.go:123] Gathering logs for kube-controller-manager [da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e] ...
	I0127 03:02:20.528469  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da85abe10bc736c1b4b05269b7b1bfb0d58b3b438aa46674109996160579425e"
	I0127 03:02:20.586466  946842 logs.go:123] Gathering logs for storage-provisioner [2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641] ...
	I0127 03:02:20.586500  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab1e28d12fa9138ac917f4aaac9d42f8f160c2266a2807213650b7115416641"
	I0127 03:02:20.620089  946842 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:20.620126  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:20.634727  946842 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:20.634770  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:20.993737  946842 logs.go:123] Gathering logs for coredns [de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e] ...
	I0127 03:02:20.993784  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de25a9884aa6e056342b2cd84a5639e7f2f6d2eebda08f1022b98fe6377d1e7e"
	I0127 03:02:21.034401  946842 logs.go:123] Gathering logs for coredns [6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e] ...
	I0127 03:02:21.034441  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6897ed45842a06a4fbaaaf78803e3ca5373843ee837a2ba7ce899cbdec8ea12e"
	I0127 03:02:21.069535  946842 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:21.069565  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:21.174498  946842 logs.go:123] Gathering logs for coredns [9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c] ...
	I0127 03:02:21.174540  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8f418d3b238c10259e6f461b2478cc8a6f459617f3a9b901553be60217c11c"
	I0127 03:02:21.208816  946842 logs.go:123] Gathering logs for kube-scheduler [a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a] ...
	I0127 03:02:21.208849  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38af713bc2f73b7c74c2f4881eb77784d66bb772568de733ebe6f7fe94ee20a"
	I0127 03:02:21.287447  946842 logs.go:123] Gathering logs for container status ...
	I0127 03:02:21.287493  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:21.324147  946842 logs.go:123] Gathering logs for etcd [b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0] ...
	I0127 03:02:21.324179  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9934fdf41e0f09ad5da699e985a202af806bf1c08786bf4c02a48e9d53b97e0"
	I0127 03:02:23.867056  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:23.867705  946842 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0127 03:02:23.867772  946842 kubeadm.go:597] duration metric: took 4m17.691472182s to restartPrimaryControlPlane
	W0127 03:02:23.867840  946842 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:02:23.867867  946842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:02:22.551421  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:22.567026  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:22.567121  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:22.615737  948597 cri.go:89] found id: ""
	I0127 03:02:22.615773  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.615782  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:22.615788  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:22.615858  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:22.659753  948597 cri.go:89] found id: ""
	I0127 03:02:22.659798  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.659810  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:22.659817  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:22.659891  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:22.693156  948597 cri.go:89] found id: ""
	I0127 03:02:22.693192  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.693203  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:22.693210  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:22.693288  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:22.725239  948597 cri.go:89] found id: ""
	I0127 03:02:22.725268  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.725278  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:22.725284  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:22.725340  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:22.760821  948597 cri.go:89] found id: ""
	I0127 03:02:22.760861  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.760874  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:22.760883  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:22.760977  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:22.793734  948597 cri.go:89] found id: ""
	I0127 03:02:22.793763  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.793772  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:22.793789  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:22.793875  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:22.827763  948597 cri.go:89] found id: ""
	I0127 03:02:22.827803  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.827814  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:22.827820  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:22.827882  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:22.863065  948597 cri.go:89] found id: ""
	I0127 03:02:22.863108  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.863120  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:22.863132  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:22.863145  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:22.910867  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:22.910913  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:22.924232  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:22.924263  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:22.990323  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:22.990345  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:22.990358  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:23.069076  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:23.069138  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:25.607860  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:25.621115  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:25.621189  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:25.655019  948597 cri.go:89] found id: ""
	I0127 03:02:25.655062  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.655074  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:25.655083  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:25.655158  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:25.688118  948597 cri.go:89] found id: ""
	I0127 03:02:25.688149  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.688158  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:25.688165  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:25.688218  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:25.719961  948597 cri.go:89] found id: ""
	I0127 03:02:25.719995  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.720006  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:25.720013  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:25.720066  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:25.751757  948597 cri.go:89] found id: ""
	I0127 03:02:25.751793  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.751805  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:25.751813  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:25.751874  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:25.785054  948597 cri.go:89] found id: ""
	I0127 03:02:25.785090  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.785102  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:25.785111  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:25.785192  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:25.818010  948597 cri.go:89] found id: ""
	I0127 03:02:25.818046  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.818054  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:25.818060  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:25.818127  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:25.849718  948597 cri.go:89] found id: ""
	I0127 03:02:25.849757  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.849768  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:25.849776  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:25.849837  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:25.891145  948597 cri.go:89] found id: ""
	I0127 03:02:25.891185  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.891197  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:25.891210  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:25.891230  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:25.969368  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:25.969411  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:26.009100  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:26.009142  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:26.054519  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:26.054562  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:26.067846  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:26.067879  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:26.142789  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:24.075444  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:26.077017  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:28.643898  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:28.656621  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:28.656692  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:28.698197  948597 cri.go:89] found id: ""
	I0127 03:02:28.698228  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.698235  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:28.698242  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:28.698301  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:28.730375  948597 cri.go:89] found id: ""
	I0127 03:02:28.730412  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.730424  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:28.730432  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:28.730500  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:28.764820  948597 cri.go:89] found id: ""
	I0127 03:02:28.764863  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.764879  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:28.764887  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:28.764983  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:28.796878  948597 cri.go:89] found id: ""
	I0127 03:02:28.796912  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.796941  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:28.796950  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:28.797012  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:28.830844  948597 cri.go:89] found id: ""
	I0127 03:02:28.830888  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.830897  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:28.830903  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:28.830959  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:28.863229  948597 cri.go:89] found id: ""
	I0127 03:02:28.863261  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.863272  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:28.863280  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:28.863341  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:28.900738  948597 cri.go:89] found id: ""
	I0127 03:02:28.900780  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.900792  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:28.900800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:28.900873  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:28.934622  948597 cri.go:89] found id: ""
	I0127 03:02:28.934663  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.934674  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:28.934690  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:28.934707  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:29.014874  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:29.014922  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:29.066883  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:29.066916  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:29.121381  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:29.121424  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:29.135916  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:29.135950  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:29.201815  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:31.702259  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:31.715374  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:31.715452  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:31.748460  948597 cri.go:89] found id: ""
	I0127 03:02:31.748496  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.748508  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:31.748517  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:31.748587  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:31.780124  948597 cri.go:89] found id: ""
	I0127 03:02:31.780161  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.780173  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:31.780180  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:31.780247  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:28.575935  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:31.076435  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:31.816546  948597 cri.go:89] found id: ""
	I0127 03:02:31.816579  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.816592  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:31.816599  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:31.816667  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:31.849343  948597 cri.go:89] found id: ""
	I0127 03:02:31.849377  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.849388  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:31.849395  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:31.849466  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:31.881664  948597 cri.go:89] found id: ""
	I0127 03:02:31.881694  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.881703  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:31.881710  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:31.881764  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:31.919480  948597 cri.go:89] found id: ""
	I0127 03:02:31.919518  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.919528  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:31.919536  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:31.919603  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:31.952360  948597 cri.go:89] found id: ""
	I0127 03:02:31.952389  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.952397  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:31.952403  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:31.952456  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:31.987865  948597 cri.go:89] found id: ""
	I0127 03:02:31.987895  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.987903  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:31.987914  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:31.987927  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:32.001095  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:32.001130  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:32.071197  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:32.071229  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:32.071246  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:32.157042  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:32.157089  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:32.195293  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:32.195328  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:34.747191  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:34.759950  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:34.760017  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:34.794269  948597 cri.go:89] found id: ""
	I0127 03:02:34.794300  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.794309  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:34.794316  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:34.794372  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:34.833580  948597 cri.go:89] found id: ""
	I0127 03:02:34.833617  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.833629  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:34.833637  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:34.833705  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:34.868608  948597 cri.go:89] found id: ""
	I0127 03:02:34.868640  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.868649  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:34.868655  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:34.868718  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:34.901502  948597 cri.go:89] found id: ""
	I0127 03:02:34.901534  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.901544  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:34.901550  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:34.901603  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:34.935196  948597 cri.go:89] found id: ""
	I0127 03:02:34.935231  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.935243  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:34.935252  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:34.935317  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:34.970481  948597 cri.go:89] found id: ""
	I0127 03:02:34.970521  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.970534  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:34.970544  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:34.970611  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:35.003207  948597 cri.go:89] found id: ""
	I0127 03:02:35.003243  948597 logs.go:282] 0 containers: []
	W0127 03:02:35.003255  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:35.003270  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:35.003328  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:35.036258  948597 cri.go:89] found id: ""
	I0127 03:02:35.036289  948597 logs.go:282] 0 containers: []
	W0127 03:02:35.036298  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:35.036318  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:35.036336  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:35.090186  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:35.090225  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:35.103908  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:35.103942  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:35.174212  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:35.174237  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:35.174251  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:35.248068  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:35.248111  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:33.578455  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:36.075903  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:38.076694  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:37.046060  946842 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.178160656s)
	I0127 03:02:37.046189  946842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:37.062736  946842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:37.073192  946842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:37.083019  946842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:37.083044  946842 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:37.083096  946842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:37.092069  946842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:37.092145  946842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:37.101492  946842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:37.110550  946842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:37.110621  946842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:37.119720  946842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:37.128701  946842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:37.128757  946842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:37.138069  946842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:37.146824  946842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:37.146897  946842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
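The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here every grep fails with status 2 because the files are already gone after kubeadm reset). A compact sketch of that loop, assuming the same endpoint and file set shown in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # drop configs that don't point at the expected endpoint
	done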
	I0127 03:02:37.156174  946842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:37.325953  946842 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:37.785610  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:37.798369  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:37.798457  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:37.830553  948597 cri.go:89] found id: ""
	I0127 03:02:37.830593  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.830605  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:37.830615  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:37.830679  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:37.861930  948597 cri.go:89] found id: ""
	I0127 03:02:37.861964  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.861973  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:37.861979  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:37.862040  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:37.893267  948597 cri.go:89] found id: ""
	I0127 03:02:37.893302  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.893314  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:37.893323  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:37.893382  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:37.929928  948597 cri.go:89] found id: ""
	I0127 03:02:37.929958  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.929967  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:37.929973  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:37.930034  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:37.964592  948597 cri.go:89] found id: ""
	I0127 03:02:37.964622  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.964631  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:37.964637  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:37.964707  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:37.997396  948597 cri.go:89] found id: ""
	I0127 03:02:37.997434  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.997443  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:37.997450  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:37.997512  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:38.030060  948597 cri.go:89] found id: ""
	I0127 03:02:38.030094  948597 logs.go:282] 0 containers: []
	W0127 03:02:38.030106  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:38.030116  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:38.030184  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:38.068588  948597 cri.go:89] found id: ""
	I0127 03:02:38.068616  948597 logs.go:282] 0 containers: []
	W0127 03:02:38.068624  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:38.068635  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:38.068647  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:38.122002  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:38.122059  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:38.137266  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:38.137304  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:38.214548  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:38.214578  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:38.214597  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:38.294408  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:38.294453  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:40.845126  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:40.858786  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:40.858871  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:40.897021  948597 cri.go:89] found id: ""
	I0127 03:02:40.897063  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.897076  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:40.897084  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:40.897161  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:40.937138  948597 cri.go:89] found id: ""
	I0127 03:02:40.937173  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.937185  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:40.937193  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:40.937258  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:40.974746  948597 cri.go:89] found id: ""
	I0127 03:02:40.974780  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.974792  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:40.974800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:40.974872  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:41.011838  948597 cri.go:89] found id: ""
	I0127 03:02:41.011869  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.011880  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:41.011888  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:41.011961  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:41.047294  948597 cri.go:89] found id: ""
	I0127 03:02:41.047325  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.047337  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:41.047344  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:41.047426  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:41.082188  948597 cri.go:89] found id: ""
	I0127 03:02:41.082222  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.082234  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:41.082241  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:41.082311  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:41.117046  948597 cri.go:89] found id: ""
	I0127 03:02:41.117082  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.117093  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:41.117099  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:41.117169  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:41.154963  948597 cri.go:89] found id: ""
	I0127 03:02:41.154995  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.155004  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:41.155014  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:41.155027  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:41.206373  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:41.206443  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:41.222908  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:41.222940  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:41.300876  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:41.300903  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:41.300936  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:41.381123  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:41.381165  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:40.077724  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:42.576268  949037 pod_ready.go:103] pod "metrics-server-f79f97bbb-2bcfv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:44.949366  947047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.71408098s)
	I0127 03:02:44.949471  947047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:45.160633  946842 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:45.160688  946842 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:45.160761  946842 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:45.160892  946842 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:45.161064  946842 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:45.161147  946842 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:45.162529  946842 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:45.162620  946842 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:45.162715  946842 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:45.162854  946842 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:45.162944  946842 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:45.163055  946842 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:45.163138  946842 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:45.163243  946842 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:45.163336  946842 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:45.163438  946842 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:45.163557  946842 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:45.163617  946842 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:45.163703  946842 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:45.163766  946842 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:45.163838  946842 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:45.163907  946842 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:45.163978  946842 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:45.164062  946842 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:45.164175  946842 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:45.164257  946842 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:45.166169  946842 out.go:235]   - Booting up control plane ...
	I0127 03:02:45.166287  946842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:45.166398  946842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:45.166495  946842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:45.166658  946842 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:45.166794  946842 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:45.166866  946842 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:45.167003  946842 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:45.167172  946842 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:45.167257  946842 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003749311s
	I0127 03:02:45.167363  946842 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:45.167445  946842 kubeadm.go:310] [api-check] The API server is healthy after 4.501857958s
	I0127 03:02:45.167595  946842 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:45.167726  946842 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:45.167821  946842 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:45.168095  946842 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-080871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:45.168187  946842 kubeadm.go:310] [bootstrap-token] Using token: tkurd6.ccoz09p0n9mtvh8u
	I0127 03:02:45.169523  946842 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:45.169678  946842 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:45.169793  946842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:45.169987  946842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:45.170172  946842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:45.170343  946842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:45.170460  946842 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:45.170589  946842 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:45.170668  946842 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:45.170737  946842 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:45.170746  946842 kubeadm.go:310] 
	I0127 03:02:45.170827  946842 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:45.170836  946842 kubeadm.go:310] 
	I0127 03:02:45.170931  946842 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:45.170939  946842 kubeadm.go:310] 
	I0127 03:02:45.170974  946842 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:45.171079  946842 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:45.171162  946842 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:45.171172  946842 kubeadm.go:310] 
	I0127 03:02:45.171244  946842 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:45.171253  946842 kubeadm.go:310] 
	I0127 03:02:45.171289  946842 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:45.171295  946842 kubeadm.go:310] 
	I0127 03:02:45.171357  946842 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:45.171459  946842 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:45.171556  946842 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:45.171566  946842 kubeadm.go:310] 
	I0127 03:02:45.171666  946842 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:45.171739  946842 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:45.171745  946842 kubeadm.go:310] 
	I0127 03:02:45.171856  946842 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkurd6.ccoz09p0n9mtvh8u \
	I0127 03:02:45.172019  946842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:02:45.172067  946842 kubeadm.go:310] 	--control-plane 
	I0127 03:02:45.172084  946842 kubeadm.go:310] 
	I0127 03:02:45.172206  946842 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:45.172214  946842 kubeadm.go:310] 
	I0127 03:02:45.172314  946842 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkurd6.ccoz09p0n9mtvh8u \
	I0127 03:02:45.172466  946842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:02:45.172481  946842 cni.go:84] Creating CNI manager for ""
	I0127 03:02:45.172489  946842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:02:45.173812  946842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:44.969346  947047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:44.986681  947047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:45.001060  947047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:45.001090  947047 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:45.001154  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:45.013568  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:45.013643  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:45.035139  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:45.047379  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:45.047453  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:45.064159  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:45.078334  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:45.078409  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:45.098888  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:45.108304  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:45.108377  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:02:45.117596  947047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:45.173805  947047 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:45.173965  947047 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:45.288767  947047 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:45.288975  947047 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:45.289110  947047 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:45.301044  947047 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:45.175031  946842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:45.185741  946842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:02:45.203745  946842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:45.203775  946842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:45.203791  946842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-080871 minikube.k8s.io/updated_at=2025_01_27T03_02_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=kubernetes-upgrade-080871 minikube.k8s.io/primary=true
	I0127 03:02:45.369100  946842 kubeadm.go:1113] duration metric: took 165.40031ms to wait for elevateKubeSystemPrivileges
	I0127 03:02:45.369160  946842 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:45.369195  946842 kubeadm.go:394] duration metric: took 4m39.463906575s to StartCluster
	I0127 03:02:45.369247  946842 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:45.369348  946842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:02:45.371811  946842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:45.372109  946842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:02:45.372825  946842 config.go:182] Loaded profile config "kubernetes-upgrade-080871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:02:45.372191  946842 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:45.372942  946842 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-080871"
	I0127 03:02:45.372966  946842 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-080871"
	W0127 03:02:45.372977  946842 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:45.373013  946842 host.go:66] Checking if "kubernetes-upgrade-080871" exists ...
	I0127 03:02:45.373563  946842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:45.373611  946842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:45.373700  946842 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-080871"
	I0127 03:02:45.373732  946842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-080871"
	I0127 03:02:45.374272  946842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:45.374315  946842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:45.374830  946842 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:45.376155  946842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:45.391362  946842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0127 03:02:45.391914  946842 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:45.392502  946842 main.go:141] libmachine: Using API Version  1
	I0127 03:02:45.392528  946842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:45.392781  946842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
	I0127 03:02:45.392995  946842 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:45.393176  946842 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:45.393635  946842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:45.393693  946842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:45.394356  946842 main.go:141] libmachine: Using API Version  1
	I0127 03:02:45.394384  946842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:45.394950  946842 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:45.395171  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetState
	I0127 03:02:45.398679  946842 kapi.go:59] client config for kubernetes-upgrade-080871: &rest.Config{Host:"https://192.168.50.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.crt", KeyFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kubernetes-upgrade-080871/client.key", CAFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:02:45.399080  946842 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-080871"
	W0127 03:02:45.399118  946842 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:45.399154  946842 host.go:66] Checking if "kubernetes-upgrade-080871" exists ...
	I0127 03:02:45.399550  946842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:45.399604  946842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:45.410499  946842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0127 03:02:45.411082  946842 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:45.411757  946842 main.go:141] libmachine: Using API Version  1
	I0127 03:02:45.411783  946842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:45.412186  946842 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:45.412430  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetState
	I0127 03:02:45.414112  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 03:02:45.414593  946842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0127 03:02:45.414931  946842 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:45.415361  946842 main.go:141] libmachine: Using API Version  1
	I0127 03:02:45.415379  946842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:45.415807  946842 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:45.416461  946842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:45.416512  946842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:45.416538  946842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:45.417840  946842 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:45.417856  946842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:45.417870  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 03:02:45.420425  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 03:02:45.420877  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:55:53 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 03:02:45.420899  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 03:02:45.421057  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 03:02:45.421234  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 03:02:45.421393  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 03:02:45.421543  946842 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 03:02:45.433458  946842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0127 03:02:45.433882  946842 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:45.434402  946842 main.go:141] libmachine: Using API Version  1
	I0127 03:02:45.434421  946842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:45.434713  946842 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:45.434925  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetState
	I0127 03:02:45.436561  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .DriverName
	I0127 03:02:45.436780  946842 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:45.436795  946842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:45.436809  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHHostname
	I0127 03:02:45.439085  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 03:02:45.439403  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:19:7f", ip: ""} in network mk-kubernetes-upgrade-080871: {Iface:virbr2 ExpiryTime:2025-01-27 03:55:53 +0000 UTC Type:0 Mac:52:54:00:ea:19:7f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:kubernetes-upgrade-080871 Clientid:01:52:54:00:ea:19:7f}
	I0127 03:02:45.439425  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | domain kubernetes-upgrade-080871 has defined IP address 192.168.50.96 and MAC address 52:54:00:ea:19:7f in network mk-kubernetes-upgrade-080871
	I0127 03:02:45.439592  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHPort
	I0127 03:02:45.439767  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHKeyPath
	I0127 03:02:45.439864  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .GetSSHUsername
	I0127 03:02:45.440009  946842 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/kubernetes-upgrade-080871/id_rsa Username:docker}
	I0127 03:02:45.572855  946842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:45.597688  946842 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:02:45.597791  946842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:45.634876  946842 api_server.go:72] duration metric: took 262.715974ms to wait for apiserver process to appear ...
	I0127 03:02:45.634909  946842 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:02:45.634935  946842 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0127 03:02:45.664056  946842 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
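The healthz probe logged just above can be reproduced by hand against the same endpoint; a sketch, not part of the test run, and it assumes anonymous auth is left at its default so /healthz is reachable without credentials:

	curl -k https://192.168.50.96:8443/healthz   # expect HTTP 200 with body "ok"; -k because the apiserver serves the cluster CA's cert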
	I0127 03:02:45.694147  946842 api_server.go:141] control plane version: v1.32.1
	I0127 03:02:45.694189  946842 api_server.go:131] duration metric: took 59.27144ms to wait for apiserver health ...
	I0127 03:02:45.694200  946842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:02:45.694289  946842 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 03:02:45.694309  946842 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 03:02:45.708611  946842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:45.737061  946842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:45.745647  946842 system_pods.go:59] 4 kube-system pods found
	I0127 03:02:45.745691  946842 system_pods.go:61] "etcd-kubernetes-upgrade-080871" [e4872793-d506-4c8b-a2b2-bc4fa6d4eb4e] Pending
	I0127 03:02:45.745700  946842 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-080871" [64f6a7c9-29aa-402a-9b89-446b2237f195] Pending
	I0127 03:02:45.745706  946842 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-080871" [179faead-2972-479f-8e48-bb897b350b11] Pending
	I0127 03:02:45.745712  946842 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-080871" [5ffece78-90e2-4125-b849-be0ae6ad9853] Pending
	I0127 03:02:45.745720  946842 system_pods.go:74] duration metric: took 51.512635ms to wait for pod list to return data ...
	I0127 03:02:45.745738  946842 kubeadm.go:582] duration metric: took 373.588301ms to wait for: map[apiserver:true system_pods:true]
	I0127 03:02:45.745756  946842 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:02:45.762774  946842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:02:45.762818  946842 node_conditions.go:123] node cpu capacity is 2
	I0127 03:02:45.762832  946842 node_conditions.go:105] duration metric: took 17.07061ms to run NodePressure ...
	I0127 03:02:45.762847  946842 start.go:241] waiting for startup goroutines ...
	I0127 03:02:46.162180  946842 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:46.162212  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Close
	I0127 03:02:46.162272  946842 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:46.162299  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Close
	I0127 03:02:46.162516  946842 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:46.162535  946842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:46.162548  946842 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:46.162555  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Close
	I0127 03:02:46.162567  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Closing plugin on server side
	I0127 03:02:46.162635  946842 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:46.162648  946842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:46.162661  946842 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:46.162698  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Close
	I0127 03:02:46.162963  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Closing plugin on server side
	I0127 03:02:46.162964  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) DBG | Closing plugin on server side
	I0127 03:02:46.162999  946842 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:46.163001  946842 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:46.163018  946842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:46.163020  946842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:46.176003  946842 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:46.176023  946842 main.go:141] libmachine: (kubernetes-upgrade-080871) Calling .Close
	I0127 03:02:46.176273  946842 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:46.176294  946842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:46.179032  946842 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 03:02:45.303322  947047 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:45.303439  947047 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:45.303532  947047 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:45.303666  947047 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:45.303760  947047 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:45.303856  947047 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:45.303922  947047 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:45.304005  947047 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:45.304087  947047 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:45.304676  947047 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:45.304799  947047 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:45.304859  947047 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:45.304969  947047 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:45.475219  947047 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:45.585607  947047 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:45.731196  947047 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:46.013377  947047 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:46.186513  947047 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:46.187171  947047 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:46.190790  947047 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:46.180146  946842 addons.go:514] duration metric: took 807.967007ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 03:02:46.180189  946842 start.go:246] waiting for cluster config update ...
	I0127 03:02:46.180205  946842 start.go:255] writing updated cluster config ...
	I0127 03:02:46.180492  946842 ssh_runner.go:195] Run: rm -f paused
	I0127 03:02:46.237096  946842 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:02:46.238702  946842 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-080871" cluster and "default" namespace by default
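The `minor skew: 0` figure in the line above is the difference between the kubectl client minor version and the cluster minor version. A minimal Go sketch of that comparison, using the values reported in the log (a hypothetical helper, not minikube's actual code):

```go
// Hypothetical sketch of the minor-version skew comparison behind the
// "kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)" line above. Not minikube's code.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlVersion, clusterVersion := "1.32.1", "1.32.1" // values taken from the log line above
	km, _ := minor(kubectlVersion)
	cm, _ := minor(clusterVersion)
	skew := km - cm
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}
```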
	I0127 03:02:43.921070  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:43.937054  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:43.937144  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:43.974834  948597 cri.go:89] found id: ""
	I0127 03:02:43.974869  948597 logs.go:282] 0 containers: []
	W0127 03:02:43.974880  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:43.974889  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:43.974953  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:44.008986  948597 cri.go:89] found id: ""
	I0127 03:02:44.009027  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.009062  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:44.009072  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:44.009160  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:44.040585  948597 cri.go:89] found id: ""
	I0127 03:02:44.040616  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.040625  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:44.040631  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:44.040703  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:44.079406  948597 cri.go:89] found id: ""
	I0127 03:02:44.079432  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.079439  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:44.079445  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:44.079495  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:44.112089  948597 cri.go:89] found id: ""
	I0127 03:02:44.112118  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.112134  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:44.112144  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:44.112206  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:44.145509  948597 cri.go:89] found id: ""
	I0127 03:02:44.145544  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.145555  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:44.145563  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:44.145643  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:44.186775  948597 cri.go:89] found id: ""
	I0127 03:02:44.186804  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.186823  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:44.186830  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:44.186890  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:44.221445  948597 cri.go:89] found id: ""
	I0127 03:02:44.221483  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.221495  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
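The empty `found id: ""` results above come from the `sudo crictl ps -a --quiet --name=<component>` probes run in the same lines. A minimal Go sketch of the same check, shelling out to the command exactly as the log runs it (a sketch under that assumption, not the test suite's code):

```go
// Sketch: ask crictl for container IDs matching a control-plane component name,
// mirroring the "sudo crictl ps -a --quiet --name=kube-apiserver" command in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "kube-apiserver"
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		fmt.Printf("crictl failed: %v\n", err)
		return
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	if len(ids) == 0 {
		fmt.Printf("no container found matching %q\n", name)
		return
	}
	fmt.Printf("%d container(s) matching %q: %v\n", len(ids), name, ids)
}
```

An empty output is exactly what produces the `0 containers: []` / "No container was found" lines above; a non-empty result would contain IDs like the ones listed in the `==> container status <==` section further down.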
	I0127 03:02:44.221511  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:44.221530  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:44.261993  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:44.262028  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:44.335242  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:44.335299  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:44.350005  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:44.350042  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:44.413941  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:44.413965  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:44.413982  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
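The describe-nodes failure a few lines above is a connection refusal on localhost:8443, i.e. no apiserver is listening yet for this profile. A minimal Go sketch of a TCP reachability probe one could run on the node (for example via `minikube ssh`) to confirm that; the address comes from the error message, everything else is an assumption:

```go
// Sketch: plain TCP dial against the apiserver port to distinguish "nothing listening"
// from TLS/auth problems. Address taken from the "localhost:8443 ... refused" error above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver port is accepting connections at %s\n", addr)
}
```

A refused dial here matches the kubectl error; a successful dial combined with failing kubectl calls would instead point at TLS or authorization.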
	
	
	==> CRI-O <==
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.175769917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946967175746335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6ab903c-a25f-4e36-9960-6786007c1633 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.176390251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a30f429-b97a-4660-b5a7-87451e2d1d86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.176491189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a30f429-b97a-4660-b5a7-87451e2d1d86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.176646629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c16cf78b301043ab9f5e0c25febad1748fd16c21e670cbd9cffe87ae8de5921,PodSandboxId:9d1f14b65d38f8dcd0ddaaff15676e8cf7f59730eaecc407b3b0fba77ca81333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946959434591012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f46ef8f80dad9329737c75e0469c032,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6660c1fa44b3a912c2e92bb4067c889411b2ed96ab03d2b37e7769845406f3b1,PodSandboxId:15ad7fac601e5971ff72fc44bbece6c733420e6aa80f73243c385b77b338170a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946959393514839,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08d94b5f9cf2db80487646ad3a89c84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6920af64ae49711e7ff4e8c2adb35aecb0e3b13902b39250c439d6c30bad27c2,PodSandboxId:4806f31a3313d6101e1a900613b72f262db00b6b75a818a88e46f53740aceb6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946959389002730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec16af3a9da288f1649a4eb514e1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ff08aaead065b1639350e6b581c64bdd475ef6d4cdc71c874e04a6a155a156,PodSandboxId:5be5a83a87a1709599333f1f89bd9473db7b6661888e019e5544caf1d6ec6dc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946959355455425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86c6632d6ca6693ca181cfb2eff39d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a30f429-b97a-4660-b5a7-87451e2d1d86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.233741207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8a7d3f1-a62c-4ea6-bf9a-666eff306217 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.233831535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8a7d3f1-a62c-4ea6-bf9a-666eff306217 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.236232989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ceb90bf-0138-4404-8275-c7059b669ede name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.236782908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946967236743087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ceb90bf-0138-4404-8275-c7059b669ede name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.237628659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f1030e5-e5eb-41d7-95d3-3efe5a8799cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.237718716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f1030e5-e5eb-41d7-95d3-3efe5a8799cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.237914068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c16cf78b301043ab9f5e0c25febad1748fd16c21e670cbd9cffe87ae8de5921,PodSandboxId:9d1f14b65d38f8dcd0ddaaff15676e8cf7f59730eaecc407b3b0fba77ca81333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946959434591012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f46ef8f80dad9329737c75e0469c032,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6660c1fa44b3a912c2e92bb4067c889411b2ed96ab03d2b37e7769845406f3b1,PodSandboxId:15ad7fac601e5971ff72fc44bbece6c733420e6aa80f73243c385b77b338170a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946959393514839,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08d94b5f9cf2db80487646ad3a89c84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6920af64ae49711e7ff4e8c2adb35aecb0e3b13902b39250c439d6c30bad27c2,PodSandboxId:4806f31a3313d6101e1a900613b72f262db00b6b75a818a88e46f53740aceb6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946959389002730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec16af3a9da288f1649a4eb514e1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ff08aaead065b1639350e6b581c64bdd475ef6d4cdc71c874e04a6a155a156,PodSandboxId:5be5a83a87a1709599333f1f89bd9473db7b6661888e019e5544caf1d6ec6dc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946959355455425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86c6632d6ca6693ca181cfb2eff39d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f1030e5-e5eb-41d7-95d3-3efe5a8799cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.282580976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aa9f1b4-49b0-41a1-ad71-29042431d5f5 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.282726322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aa9f1b4-49b0-41a1-ad71-29042431d5f5 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.284363687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fd1971b-e60e-410f-9f28-a3a20e532e54 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.285107558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946967285029231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fd1971b-e60e-410f-9f28-a3a20e532e54 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.285855639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fa6459b-69f3-4be1-a627-f9e709b4ed6c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.285921540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fa6459b-69f3-4be1-a627-f9e709b4ed6c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.286030976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c16cf78b301043ab9f5e0c25febad1748fd16c21e670cbd9cffe87ae8de5921,PodSandboxId:9d1f14b65d38f8dcd0ddaaff15676e8cf7f59730eaecc407b3b0fba77ca81333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946959434591012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f46ef8f80dad9329737c75e0469c032,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6660c1fa44b3a912c2e92bb4067c889411b2ed96ab03d2b37e7769845406f3b1,PodSandboxId:15ad7fac601e5971ff72fc44bbece6c733420e6aa80f73243c385b77b338170a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946959393514839,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08d94b5f9cf2db80487646ad3a89c84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6920af64ae49711e7ff4e8c2adb35aecb0e3b13902b39250c439d6c30bad27c2,PodSandboxId:4806f31a3313d6101e1a900613b72f262db00b6b75a818a88e46f53740aceb6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946959389002730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec16af3a9da288f1649a4eb514e1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ff08aaead065b1639350e6b581c64bdd475ef6d4cdc71c874e04a6a155a156,PodSandboxId:5be5a83a87a1709599333f1f89bd9473db7b6661888e019e5544caf1d6ec6dc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946959355455425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86c6632d6ca6693ca181cfb2eff39d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fa6459b-69f3-4be1-a627-f9e709b4ed6c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.327006968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a8f5273-81f0-4670-a999-351d58b9d5c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.327125731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a8f5273-81f0-4670-a999-351d58b9d5c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.328338103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93b8c0ba-ed03-4726-8afd-f7132f7ed39d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.328700703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946967328678388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93b8c0ba-ed03-4726-8afd-f7132f7ed39d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.329298174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd4ef3d8-a79e-4086-ade5-30089640a22a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.329358086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd4ef3d8-a79e-4086-ade5-30089640a22a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:02:47 kubernetes-upgrade-080871 crio[2901]: time="2025-01-27 03:02:47.329459109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c16cf78b301043ab9f5e0c25febad1748fd16c21e670cbd9cffe87ae8de5921,PodSandboxId:9d1f14b65d38f8dcd0ddaaff15676e8cf7f59730eaecc407b3b0fba77ca81333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946959434591012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f46ef8f80dad9329737c75e0469c032,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6660c1fa44b3a912c2e92bb4067c889411b2ed96ab03d2b37e7769845406f3b1,PodSandboxId:15ad7fac601e5971ff72fc44bbece6c733420e6aa80f73243c385b77b338170a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946959393514839,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08d94b5f9cf2db80487646ad3a89c84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6920af64ae49711e7ff4e8c2adb35aecb0e3b13902b39250c439d6c30bad27c2,PodSandboxId:4806f31a3313d6101e1a900613b72f262db00b6b75a818a88e46f53740aceb6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946959389002730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec16af3a9da288f1649a4eb514e1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ff08aaead065b1639350e6b581c64bdd475ef6d4cdc71c874e04a6a155a156,PodSandboxId:5be5a83a87a1709599333f1f89bd9473db7b6661888e019e5544caf1d6ec6dc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946959355455425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-080871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86c6632d6ca6693ca181cfb2eff39d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd4ef3d8-a79e-4086-ade5-30089640a22a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9c16cf78b3010       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   7 seconds ago       Running             kube-scheduler            4                   9d1f14b65d38f       kube-scheduler-kubernetes-upgrade-080871
	6660c1fa44b3a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   8 seconds ago       Running             etcd                      1                   15ad7fac601e5       etcd-kubernetes-upgrade-080871
	6920af64ae497       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   8 seconds ago       Running             kube-apiserver            7                   4806f31a3313d       kube-apiserver-kubernetes-upgrade-080871
	a1ff08aaead06       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   8 seconds ago       Running             kube-controller-manager   1                   5be5a83a87a17       kube-controller-manager-kubernetes-upgrade-080871
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-080871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-080871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=kubernetes-upgrade-080871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_02_45_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:02:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-080871
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:02:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:02:42 +0000   Mon, 27 Jan 2025 03:02:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:02:42 +0000   Mon, 27 Jan 2025 03:02:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:02:42 +0000   Mon, 27 Jan 2025 03:02:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:02:42 +0000   Mon, 27 Jan 2025 03:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.96
	  Hostname:    kubernetes-upgrade-080871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 905290286b264248ab9565960298f51d
	  System UUID:                90529028-6b26-4248-ab95-65960298f51d
	  Boot ID:                    f7968a5c-1950-45c1-96f7-f719145555ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-080871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-080871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-080871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-080871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)  kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node kubernetes-upgrade-080871 status is now: NodeHasSufficientPID
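The Conditions and Capacity blocks above are what the earlier node_conditions.go checks read (NodePressure conditions, cpu capacity 2, ephemeral storage 17734596Ki). A hedged client-go sketch that lists the same fields, assuming a reachable apiserver and the kubeconfig path used elsewhere in the log:

```go
// Sketch (assumes k8s.io/client-go is available and the apiserver is reachable):
// list each node's capacity and conditions, the same data shown in the describe output.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption; on the minikube node the log uses /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
}
```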
	
	
	==> dmesg <==
	[  +0.059449] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069104] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.182709] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.173798] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.281490] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +3.895737] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +1.500942] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.065307] kauditd_printk_skb: 158 callbacks suppressed
	[ +11.067449] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.099887] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.242212] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.472786] systemd-fstab-generator[2391]: Ignoring "noauto" option for root device
	[  +0.216991] systemd-fstab-generator[2496]: Ignoring "noauto" option for root device
	[  +0.292567] systemd-fstab-generator[2613]: Ignoring "noauto" option for root device
	[  +0.239684] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.453114] systemd-fstab-generator[2760]: Ignoring "noauto" option for root device
	[Jan27 02:58] systemd-fstab-generator[3050]: Ignoring "noauto" option for root device
	[  +0.086460] kauditd_printk_skb: 188 callbacks suppressed
	[  +6.375986] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.790805] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[ +22.556175] kauditd_printk_skb: 26 callbacks suppressed
	[Jan27 03:02] systemd-fstab-generator[9578]: Ignoring "noauto" option for root device
	[  +6.070434] systemd-fstab-generator[9915]: Ignoring "noauto" option for root device
	[  +0.106245] kauditd_printk_skb: 67 callbacks suppressed
	[  +1.105534] systemd-fstab-generator[9987]: Ignoring "noauto" option for root device
	
	
	==> etcd [6660c1fa44b3a912c2e92bb4067c889411b2ed96ab03d2b37e7769845406f3b1] <==
	{"level":"info","ts":"2025-01-27T03:02:39.812886Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T03:02:39.815085Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"46ee31ebc3aa8fe","initial-advertise-peer-urls":["https://192.168.50.96:2380"],"listen-peer-urls":["https://192.168.50.96:2380"],"advertise-client-urls":["https://192.168.50.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T03:02:39.815623Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T03:02:39.816303Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2025-01-27T03:02:39.816362Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2025-01-27T03:02:40.141336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe is starting a new election at term 1"}
	{"level":"info","ts":"2025-01-27T03:02:40.141432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became pre-candidate at term 1"}
	{"level":"info","ts":"2025-01-27T03:02:40.141471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgPreVoteResp from 46ee31ebc3aa8fe at term 1"}
	{"level":"info","ts":"2025-01-27T03:02:40.141498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became candidate at term 2"}
	{"level":"info","ts":"2025-01-27T03:02:40.141516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgVoteResp from 46ee31ebc3aa8fe at term 2"}
	{"level":"info","ts":"2025-01-27T03:02:40.141535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became leader at term 2"}
	{"level":"info","ts":"2025-01-27T03:02:40.141553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46ee31ebc3aa8fe elected leader 46ee31ebc3aa8fe at term 2"}
	{"level":"info","ts":"2025-01-27T03:02:40.143120Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"46ee31ebc3aa8fe","local-member-attributes":"{Name:kubernetes-upgrade-080871 ClientURLs:[https://192.168.50.96:2379]}","request-path":"/0/members/46ee31ebc3aa8fe/attributes","cluster-id":"fa78aab20fdf43c2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T03:02:40.144282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T03:02:40.144681Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:40.144770Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T03:02:40.148263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:40.148298Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T03:02:40.148736Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:40.159540Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T03:02:40.160110Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.96:2379"}
	{"level":"info","ts":"2025-01-27T03:02:40.160275Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa78aab20fdf43c2","local-member-id":"46ee31ebc3aa8fe","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:40.166266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:40.166315Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T03:02:40.175930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:02:47 up 7 min,  0 users,  load average: 1.15, 0.52, 0.25
	Linux kubernetes-upgrade-080871 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6920af64ae49711e7ff4e8c2adb35aecb0e3b13902b39250c439d6c30bad27c2] <==
	I0127 03:02:41.923574       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 03:02:41.923638       1 policy_source.go:240] refreshing policies
	I0127 03:02:41.924363       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 03:02:41.924708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 03:02:41.925467       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 03:02:41.934148       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 03:02:41.934307       1 aggregator.go:171] initial CRD sync complete...
	I0127 03:02:41.934354       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 03:02:41.934372       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 03:02:41.934379       1 cache.go:39] Caches are synced for autoregister controller
	I0127 03:02:41.959645       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 03:02:41.997723       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 03:02:42.801961       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 03:02:42.818714       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 03:02:42.818754       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 03:02:43.420129       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 03:02:43.473365       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 03:02:43.550687       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 03:02:43.563278       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.96]
	I0127 03:02:43.564497       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 03:02:43.570632       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 03:02:43.851534       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 03:02:44.522326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 03:02:44.541693       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 03:02:44.552689       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [a1ff08aaead065b1639350e6b581c64bdd475ef6d4cdc71c874e04a6a155a156] <==
	I0127 03:02:47.201605       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 03:02:47.201647       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 03:02:47.201693       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 03:02:47.201740       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 03:02:47.201766       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 03:02:47.201808       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 03:02:47.201850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 03:02:47.201866       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 03:02:47.201895       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 03:02:47.201926       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 03:02:47.201955       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 03:02:47.201978       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 03:02:47.202008       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 03:02:47.202021       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 03:02:47.202046       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 03:02:47.202076       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 03:02:47.202367       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 03:02:47.202434       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 03:02:47.202587       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 03:02:47.497094       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 03:02:47.497257       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 03:02:47.497274       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 03:02:47.751668       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 03:02:47.751746       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 03:02:47.751757       1 shared_informer.go:313] Waiting for caches to sync for namespace
	
	
	==> kube-scheduler [9c16cf78b301043ab9f5e0c25febad1748fd16c21e670cbd9cffe87ae8de5921] <==
	W0127 03:02:41.955883       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:41.955925       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.782934       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:02:42.783086       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.915288       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:02:42.915398       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.927159       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 03:02:42.927258       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.935896       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:42.935996       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.944266       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 03:02:42.944356       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:42.970971       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 03:02:42.971131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:43.076235       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:43.077059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:43.104131       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:43.104942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:43.192220       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 03:02:43.192306       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:43.202106       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:02:43.202453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:43.279790       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:02:43.279845       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 03:02:45.727387       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679139    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d08d94b5f9cf2db80487646ad3a89c84-etcd-certs\") pod \"etcd-kubernetes-upgrade-080871\" (UID: \"d08d94b5f9cf2db80487646ad3a89c84\") " pod="kube-system/etcd-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679522    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81ec16af3a9da288f1649a4eb514e1eb-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-080871\" (UID: \"81ec16af3a9da288f1649a4eb514e1eb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679623    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e86c6632d6ca6693ca181cfb2eff39d7-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-080871\" (UID: \"e86c6632d6ca6693ca181cfb2eff39d7\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679757    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e86c6632d6ca6693ca181cfb2eff39d7-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-080871\" (UID: \"e86c6632d6ca6693ca181cfb2eff39d7\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679855    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e86c6632d6ca6693ca181cfb2eff39d7-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-080871\" (UID: \"e86c6632d6ca6693ca181cfb2eff39d7\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.679968    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d08d94b5f9cf2db80487646ad3a89c84-etcd-data\") pod \"etcd-kubernetes-upgrade-080871\" (UID: \"d08d94b5f9cf2db80487646ad3a89c84\") " pod="kube-system/etcd-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.680059    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81ec16af3a9da288f1649a4eb514e1eb-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-080871\" (UID: \"81ec16af3a9da288f1649a4eb514e1eb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.680136    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81ec16af3a9da288f1649a4eb514e1eb-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-080871\" (UID: \"81ec16af3a9da288f1649a4eb514e1eb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.680245    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e86c6632d6ca6693ca181cfb2eff39d7-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-080871\" (UID: \"e86c6632d6ca6693ca181cfb2eff39d7\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.680359    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e86c6632d6ca6693ca181cfb2eff39d7-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-080871\" (UID: \"e86c6632d6ca6693ca181cfb2eff39d7\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.680480    9922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f46ef8f80dad9329737c75e0469c032-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-080871\" (UID: \"7f46ef8f80dad9329737c75e0469c032\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.702073    9922 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-080871"
	Jan 27 03:02:44 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:44.702349    9922 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.430639    9922 apiserver.go:52] "Watching apiserver"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.470233    9922 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.579560    9922 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.579905    9922 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.580303    9922 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: E0127 03:02:45.634759    9922 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-080871\" already exists" pod="kube-system/etcd-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: E0127 03:02:45.648219    9922 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-080871\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: E0127 03:02:45.648543    9922 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-080871\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-080871"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.759791    9922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-080871" podStartSLOduration=1.7597548189999999 podStartE2EDuration="1.759754819s" podCreationTimestamp="2025-01-27 03:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 03:02:45.71745306 +0000 UTC m=+1.379623415" watchObservedRunningTime="2025-01-27 03:02:45.759754819 +0000 UTC m=+1.421925170"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.780383    9922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-080871" podStartSLOduration=1.7803615609999999 podStartE2EDuration="1.780361561s" podCreationTimestamp="2025-01-27 03:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 03:02:45.760141299 +0000 UTC m=+1.422311655" watchObservedRunningTime="2025-01-27 03:02:45.780361561 +0000 UTC m=+1.442531895"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.802058    9922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-080871" podStartSLOduration=1.802040479 podStartE2EDuration="1.802040479s" podCreationTimestamp="2025-01-27 03:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 03:02:45.801789571 +0000 UTC m=+1.463959925" watchObservedRunningTime="2025-01-27 03:02:45.802040479 +0000 UTC m=+1.464210833"
	Jan 27 03:02:45 kubernetes-upgrade-080871 kubelet[9922]: I0127 03:02:45.802156    9922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-080871" podStartSLOduration=1.802149175 podStartE2EDuration="1.802149175s" podCreationTimestamp="2025-01-27 03:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 03:02:45.780673263 +0000 UTC m=+1.442843617" watchObservedRunningTime="2025-01-27 03:02:45.802149175 +0000 UTC m=+1.464319531"
	

                                                
                                                
-- /stdout --
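Note on the kube-scheduler log above: the repeated "forbidden" list/watch errors are commonly transient during an upgrade, appearing while the restarted apiserver is still initializing its RBAC data; in this run they stop after 03:02:43 and the scheduler then reports its caches as synced. If such errors were to persist, the scheduler's permissions could be checked directly with kubectl impersonation; a minimal sketch, assuming the kubernetes-upgrade-080871 context used in this run:

	# check whether the scheduler's user is allowed the list/watch calls that failed above
	kubectl --context kubernetes-upgrade-080871 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context kubernetes-upgrade-080871 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
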
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-080871 -n kubernetes-upgrade-080871
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-080871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-080871 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-080871 describe pod storage-provisioner: exit status 1 (81.948619ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-080871 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-080871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-080871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-080871: (1.175848339s)
--- FAIL: TestKubernetesUpgrade (722.78s)
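
The post-mortem steps logged above (listing pods that are not Running, then describing them) can be reproduced by hand against any profile; a minimal sketch, assuming the kubernetes-upgrade-080871 context from this run:

	# list pods in any namespace whose phase is not Running
	kubectl --context kubernetes-upgrade-080871 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe one of them; this exits non-zero with NotFound if the pod no longer exists, as seen with storage-provisioner above
	kubectl --context kubernetes-upgrade-080871 describe pod storage-provisioner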

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (60.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-622238 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 02:49:48.567246  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-622238 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.2064701s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-622238] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-622238" primary control-plane node in "pause-622238" cluster
	* Updating the running kvm2 "pause-622238" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-622238" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:49:45.891185  939626 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:49:45.891428  939626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:49:45.891463  939626 out.go:358] Setting ErrFile to fd 2...
	I0127 02:49:45.891481  939626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:49:45.891826  939626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:49:45.892744  939626 out.go:352] Setting JSON to false
	I0127 02:49:45.894331  939626 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12729,"bootTime":1737933457,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:49:45.894464  939626 start.go:139] virtualization: kvm guest
	I0127 02:49:45.896806  939626 out.go:177] * [pause-622238] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:49:45.898313  939626 notify.go:220] Checking for updates...
	I0127 02:49:45.898339  939626 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:49:45.899674  939626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:49:45.900982  939626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:49:45.902305  939626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:49:45.903583  939626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:49:45.904793  939626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:49:45.906544  939626 config.go:182] Loaded profile config "pause-622238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:49:45.907211  939626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:49:45.907310  939626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:49:45.930431  939626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37433
	I0127 02:49:45.931127  939626 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:49:45.931787  939626 main.go:141] libmachine: Using API Version  1
	I0127 02:49:45.931809  939626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:49:45.932209  939626 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:49:45.932459  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:49:45.932747  939626 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:49:45.933150  939626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:49:45.933199  939626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:49:45.949207  939626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I0127 02:49:45.949725  939626 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:49:45.950422  939626 main.go:141] libmachine: Using API Version  1
	I0127 02:49:45.950460  939626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:49:45.950780  939626 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:49:45.951006  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:49:45.989411  939626 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:49:45.990664  939626 start.go:297] selected driver: kvm2
	I0127 02:49:45.990683  939626 start.go:901] validating driver "kvm2" against &{Name:pause-622238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-622238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:49:45.990883  939626 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:49:45.991364  939626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:49:45.991461  939626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:49:46.013070  939626 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:49:46.014195  939626 cni.go:84] Creating CNI manager for ""
	I0127 02:49:46.014269  939626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:49:46.014352  939626 start.go:340] cluster config:
	{Name:pause-622238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-622238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:49:46.014540  939626 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:49:46.016363  939626 out.go:177] * Starting "pause-622238" primary control-plane node in "pause-622238" cluster
	I0127 02:49:46.017362  939626 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 02:49:46.017424  939626 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 02:49:46.017441  939626 cache.go:56] Caching tarball of preloaded images
	I0127 02:49:46.017611  939626 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:49:46.017626  939626 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 02:49:46.017807  939626 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/config.json ...
	I0127 02:49:46.018085  939626 start.go:360] acquireMachinesLock for pause-622238: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:50:06.783101  939626 start.go:364] duration metric: took 20.764984439s to acquireMachinesLock for "pause-622238"
	I0127 02:50:06.783163  939626 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:50:06.783170  939626 fix.go:54] fixHost starting: 
	I0127 02:50:06.783589  939626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:50:06.783650  939626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:50:06.802368  939626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0127 02:50:06.802950  939626 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:50:06.803422  939626 main.go:141] libmachine: Using API Version  1
	I0127 02:50:06.803446  939626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:50:06.803718  939626 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:50:06.803843  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:06.804039  939626 main.go:141] libmachine: (pause-622238) Calling .GetState
	I0127 02:50:06.806064  939626 fix.go:112] recreateIfNeeded on pause-622238: state=Running err=<nil>
	W0127 02:50:06.806091  939626 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:50:06.807842  939626 out.go:177] * Updating the running kvm2 "pause-622238" VM ...
	I0127 02:50:06.809175  939626 machine.go:93] provisionDockerMachine start ...
	I0127 02:50:06.809208  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:06.809419  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:06.812690  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:06.813278  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:06.813295  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:06.813603  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:06.813786  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:06.813962  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:06.814121  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:06.815867  939626 main.go:141] libmachine: Using SSH client type: native
	I0127 02:50:06.816075  939626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I0127 02:50:06.816084  939626 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:50:06.947150  939626 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-622238
	
	I0127 02:50:06.947186  939626 main.go:141] libmachine: (pause-622238) Calling .GetMachineName
	I0127 02:50:06.947475  939626 buildroot.go:166] provisioning hostname "pause-622238"
	I0127 02:50:06.947519  939626 main.go:141] libmachine: (pause-622238) Calling .GetMachineName
	I0127 02:50:06.947734  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:06.951238  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:06.951692  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:06.951729  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:06.952048  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:06.952273  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:06.952448  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:06.952568  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:06.952754  939626 main.go:141] libmachine: Using SSH client type: native
	I0127 02:50:06.953030  939626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I0127 02:50:06.953072  939626 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-622238 && echo "pause-622238" | sudo tee /etc/hostname
	I0127 02:50:07.089688  939626 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-622238
	
	I0127 02:50:07.089723  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:07.093210  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.093624  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:07.093657  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.093911  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:07.094095  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:07.094283  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:07.094453  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:07.094671  939626 main.go:141] libmachine: Using SSH client type: native
	I0127 02:50:07.094940  939626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I0127 02:50:07.094967  939626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-622238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-622238/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-622238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:50:07.221582  939626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:50:07.221634  939626 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 02:50:07.221676  939626 buildroot.go:174] setting up certificates
	I0127 02:50:07.221689  939626 provision.go:84] configureAuth start
	I0127 02:50:07.221705  939626 main.go:141] libmachine: (pause-622238) Calling .GetMachineName
	I0127 02:50:07.222024  939626 main.go:141] libmachine: (pause-622238) Calling .GetIP
	I0127 02:50:07.224854  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.225268  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:07.225312  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.225461  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:07.228130  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.228529  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:07.228562  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.228712  939626 provision.go:143] copyHostCerts
	I0127 02:50:07.228785  939626 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 02:50:07.228811  939626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 02:50:07.228899  939626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 02:50:07.229111  939626 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 02:50:07.229129  939626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 02:50:07.229164  939626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 02:50:07.229262  939626 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 02:50:07.229281  939626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 02:50:07.229317  939626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 02:50:07.229396  939626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.pause-622238 san=[127.0.0.1 192.168.50.58 localhost minikube pause-622238]
	I0127 02:50:07.283790  939626 provision.go:177] copyRemoteCerts
	I0127 02:50:07.283871  939626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:50:07.283906  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:07.287374  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.287858  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:07.287893  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.288113  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:07.288317  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:07.288480  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:07.288616  939626 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/pause-622238/id_rsa Username:docker}
	I0127 02:50:07.396390  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:50:07.431187  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:50:07.464301  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0127 02:50:07.491623  939626 provision.go:87] duration metric: took 269.911714ms to configureAuth
	I0127 02:50:07.491661  939626 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:50:07.491962  939626 config.go:182] Loaded profile config "pause-622238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:50:07.492084  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:07.495260  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.495694  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:07.495724  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:07.496064  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:07.496359  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:07.496583  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:07.496773  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:07.496985  939626 main.go:141] libmachine: Using SSH client type: native
	I0127 02:50:07.497226  939626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I0127 02:50:07.497258  939626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 02:50:13.841969  939626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 02:50:13.841998  939626 machine.go:96] duration metric: took 7.032800657s to provisionDockerMachine
	I0127 02:50:13.842014  939626 start.go:293] postStartSetup for "pause-622238" (driver="kvm2")
	I0127 02:50:13.842028  939626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:50:13.842072  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:13.842413  939626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:50:13.842448  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:13.845543  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:13.845975  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:13.846023  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:13.846133  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:13.846325  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:13.846526  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:13.846674  939626 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/pause-622238/id_rsa Username:docker}
	I0127 02:50:13.931469  939626 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:50:13.936206  939626 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:50:13.936240  939626 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 02:50:13.936306  939626 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 02:50:13.936411  939626 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 02:50:13.936538  939626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:50:13.948572  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:50:13.979504  939626 start.go:296] duration metric: took 137.474405ms for postStartSetup
	I0127 02:50:13.979558  939626 fix.go:56] duration metric: took 7.196388023s for fixHost
	I0127 02:50:13.979587  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:13.982447  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:13.982765  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:13.982813  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:13.983009  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:13.983191  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:13.983344  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:13.983516  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:13.983690  939626 main.go:141] libmachine: Using SSH client type: native
	I0127 02:50:13.983943  939626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I0127 02:50:13.983959  939626 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:50:14.099011  939626 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946214.085341452
	
	I0127 02:50:14.099042  939626 fix.go:216] guest clock: 1737946214.085341452
	I0127 02:50:14.099053  939626 fix.go:229] Guest: 2025-01-27 02:50:14.085341452 +0000 UTC Remote: 2025-01-27 02:50:13.979563154 +0000 UTC m=+28.145987046 (delta=105.778298ms)
	I0127 02:50:14.099093  939626 fix.go:200] guest clock delta is within tolerance: 105.778298ms
	I0127 02:50:14.099099  939626 start.go:83] releasing machines lock for "pause-622238", held for 7.315956582s
	I0127 02:50:14.099129  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:14.099413  939626 main.go:141] libmachine: (pause-622238) Calling .GetIP
	I0127 02:50:14.103333  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.103830  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:14.103887  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.103989  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:14.104713  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:14.104945  939626 main.go:141] libmachine: (pause-622238) Calling .DriverName
	I0127 02:50:14.105123  939626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:50:14.105178  939626 ssh_runner.go:195] Run: cat /version.json
	I0127 02:50:14.105201  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:14.105209  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHHostname
	I0127 02:50:14.108446  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.108957  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:14.108991  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.109114  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.109337  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:14.109541  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:14.109654  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:14.109710  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:14.109663  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:14.109937  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHPort
	I0127 02:50:14.109980  939626 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/pause-622238/id_rsa Username:docker}
	I0127 02:50:14.110070  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHKeyPath
	I0127 02:50:14.110164  939626 main.go:141] libmachine: (pause-622238) Calling .GetSSHUsername
	I0127 02:50:14.110269  939626 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/pause-622238/id_rsa Username:docker}
	I0127 02:50:14.197700  939626 ssh_runner.go:195] Run: systemctl --version
	I0127 02:50:14.228217  939626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 02:50:14.395074  939626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:50:14.400795  939626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:50:14.400867  939626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:50:14.410045  939626 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 02:50:14.410072  939626 start.go:495] detecting cgroup driver to use...
	I0127 02:50:14.410146  939626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 02:50:14.428441  939626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 02:50:14.442742  939626 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:50:14.442816  939626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:50:14.458202  939626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:50:14.472989  939626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:50:14.639477  939626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:50:14.811264  939626 docker.go:233] disabling docker service ...
	I0127 02:50:14.811413  939626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:50:14.836132  939626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:50:14.851723  939626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:50:15.042426  939626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:50:15.400793  939626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:50:15.473363  939626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:50:15.675015  939626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 02:50:15.675097  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.738962  939626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 02:50:15.739102  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.783278  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.846232  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.887406  939626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:50:15.919036  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.948294  939626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:15.998309  939626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:50:16.101218  939626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:50:16.142577  939626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
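Note: the sed runs above amount to a small CRI-O drop-in in /etc/crio/crio.conf.d/02-crio.conf (pause_image "registry.k8s.io/pause:3.10", cgroup_manager "cgroupfs", conmon_cgroup "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls), while the sysctl/echo pair checks the kernel prerequisites for pod networking. A sketch for verifying the result by hand (exact file contents may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables    # the bridge sysctl needs br_netfilter loaded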
	I0127 02:50:16.213472  939626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:16.562404  939626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 02:50:17.185407  939626 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 02:50:17.185590  939626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 02:50:17.201587  939626 start.go:563] Will wait 60s for crictl version
	I0127 02:50:17.201690  939626 ssh_runner.go:195] Run: which crictl
	I0127 02:50:17.250696  939626 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:50:17.496708  939626 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 02:50:17.496819  939626 ssh_runner.go:195] Run: crio --version
	I0127 02:50:17.755050  939626 ssh_runner.go:195] Run: crio --version
	I0127 02:50:17.823846  939626 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 02:50:17.824860  939626 main.go:141] libmachine: (pause-622238) Calling .GetIP
	I0127 02:50:17.828574  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:17.833343  939626 main.go:141] libmachine: (pause-622238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:19:d7", ip: ""} in network mk-pause-622238: {Iface:virbr2 ExpiryTime:2025-01-27 03:48:39 +0000 UTC Type:0 Mac:52:54:00:d0:19:d7 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:pause-622238 Clientid:01:52:54:00:d0:19:d7}
	I0127 02:50:17.833374  939626 main.go:141] libmachine: (pause-622238) DBG | domain pause-622238 has defined IP address 192.168.50.58 and MAC address 52:54:00:d0:19:d7 in network mk-pause-622238
	I0127 02:50:17.833866  939626 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 02:50:17.839247  939626 kubeadm.go:883] updating cluster {Name:pause-622238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-622238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:50:17.839436  939626 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 02:50:17.839508  939626 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:50:17.932030  939626 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 02:50:17.932062  939626 crio.go:433] Images already preloaded, skipping extraction
	I0127 02:50:17.932160  939626 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:50:17.977245  939626 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 02:50:17.977276  939626 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:50:17.977286  939626 kubeadm.go:934] updating node { 192.168.50.58 8443 v1.32.1 crio true true} ...
	I0127 02:50:17.977412  939626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-622238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-622238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
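Note: the [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in; the 311-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down is where it lands. On the node, the effective unit can be inspected with, for example:

    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in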
	I0127 02:50:17.977492  939626 ssh_runner.go:195] Run: crio config
	I0127 02:50:18.077450  939626 cni.go:84] Creating CNI manager for ""
	I0127 02:50:18.077478  939626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:18.077500  939626 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:50:18.077533  939626 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.58 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-622238 NodeName:pause-622238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:50:18.077742  939626 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-622238"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:50:18.077920  939626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:50:18.099542  939626 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:50:18.099625  939626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:50:18.114832  939626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 02:50:18.138720  939626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:50:18.160253  939626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
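Note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) travel as the single 2289-byte kubeadm.yaml.new written by the scp above. Purely as an illustration of how such a file is consumed (minikube drives kubeadm itself, with its own flags and phases):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new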
	I0127 02:50:18.181521  939626 ssh_runner.go:195] Run: grep 192.168.50.58	control-plane.minikube.internal$ /etc/hosts
	I0127 02:50:18.185948  939626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:18.393298  939626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:50:18.412211  939626 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238 for IP: 192.168.50.58
	I0127 02:50:18.412236  939626 certs.go:194] generating shared ca certs ...
	I0127 02:50:18.412258  939626 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:18.412440  939626 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:50:18.412517  939626 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:50:18.412532  939626 certs.go:256] generating profile certs ...
	I0127 02:50:18.412645  939626 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/client.key
	I0127 02:50:18.412749  939626 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/apiserver.key.73ce4eec
	I0127 02:50:18.412811  939626 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/proxy-client.key
	I0127 02:50:18.413022  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:50:18.413084  939626 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:50:18.413100  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:50:18.413140  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:50:18.413173  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:50:18.413210  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:50:18.413277  939626 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:50:18.414135  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:50:18.448069  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:50:18.484549  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:50:18.520382  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:50:18.552906  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 02:50:18.585034  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:50:18.614293  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:50:18.642234  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/pause-622238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:50:18.669228  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:50:18.697560  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:50:18.726739  939626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:50:18.758640  939626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:50:18.778144  939626 ssh_runner.go:195] Run: openssl version
	I0127 02:50:18.784636  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:50:18.798372  939626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:50:18.803307  939626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:50:18.803392  939626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:50:18.809503  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:50:18.820169  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:50:18.831991  939626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:18.836980  939626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:18.837058  939626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:18.843246  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:50:18.856485  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:50:18.870199  939626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:50:18.875283  939626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:50:18.875359  939626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:50:18.882866  939626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
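Note: openssl x509 -hash -noout prints the subject-name hash that OpenSSL uses to look up trusted CAs, and the <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow that convention. The same step for the minikube CA, as a sketch:

    HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"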
	I0127 02:50:18.894997  939626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:50:18.900168  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:50:18.906526  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:50:18.912903  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:50:18.919332  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:50:18.926439  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:50:18.932512  939626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
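Note: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how these runs decide whether the control-plane certificates need regenerating. A loop over a few of the same files, as a sketch:

    for crt in apiserver-kubelet-client etcd/server front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h"
    done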
	I0127 02:50:18.938277  939626 kubeadm.go:392] StartCluster: {Name:pause-622238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-622238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:50:18.938436  939626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:50:18.938487  939626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:50:18.982483  939626 cri.go:89] found id: "723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56"
	I0127 02:50:18.982514  939626 cri.go:89] found id: "bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229"
	I0127 02:50:18.982520  939626 cri.go:89] found id: "825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819"
	I0127 02:50:18.982525  939626 cri.go:89] found id: "d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3"
	I0127 02:50:18.982530  939626 cri.go:89] found id: "90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686"
	I0127 02:50:18.982535  939626 cri.go:89] found id: "fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb"
	I0127 02:50:18.982539  939626 cri.go:89] found id: ""
	I0127 02:50:18.982593  939626 ssh_runner.go:195] Run: sudo runc list -f json
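Note: the --quiet query above prints only the container IDs for pods in the kube-system namespace; dropping --quiet shows names and states, which is easier to read by hand:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system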

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-622238 -n pause-622238
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-622238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-622238 logs -n 25: (1.219364944s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC | 27 Jan 25 02:46 UTC |
	|         | --cancel-scheduled             |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:47 UTC |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| delete  | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:47 UTC |
	| start   | -p offline-crio-922784         | offline-crio-922784    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:48 UTC |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                        |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --no-kubernetes                |                        |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                        |         |         |                     |                     |
	|         | --driver=kvm2                  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p pause-622238 --memory=2048  | pause-622238           | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:49 UTC |
	|         | --install-addons=false         |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:49 UTC |
	|         | --driver=kvm2                  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p running-upgrade-078958      | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:50 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                        |         |         |                     |                     |
	|         |  --container-runtime=crio      |                        |         |         |                     |                     |
	| delete  | -p offline-crio-922784         | offline-crio-922784    | jenkins | v1.35.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:48 UTC |
	| start   | -p stopped-upgrade-883403      | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:50 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                        |         |         |                     |                     |
	|         |  --container-runtime=crio      |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:50 UTC |
	|         | --no-kubernetes --driver=kvm2  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p pause-622238                | pause-622238           | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:50 UTC |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p running-upgrade-078958      | running-upgrade-078958 | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --memory=2200                  |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-883403 stop    | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	| start   | -p stopped-upgrade-883403      | stopped-upgrade-883403 | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --memory=2200                  |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:50:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:50:31.519215  940301 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:50:31.519315  940301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:31.519320  940301 out.go:358] Setting ErrFile to fd 2...
	I0127 02:50:31.519324  940301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:31.519508  940301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:50:31.520038  940301 out.go:352] Setting JSON to false
	I0127 02:50:31.521144  940301 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12774,"bootTime":1737933457,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:50:31.521250  940301 start.go:139] virtualization: kvm guest
	I0127 02:50:31.523572  940301 out.go:177] * [stopped-upgrade-883403] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:50:31.524756  940301 notify.go:220] Checking for updates...
	I0127 02:50:31.524795  940301 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:50:31.526052  940301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:50:31.527312  940301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:31.528586  940301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:50:31.529897  940301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:50:31.531078  940301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:50:31.532745  940301 config.go:182] Loaded profile config "stopped-upgrade-883403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 02:50:31.533347  940301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:50:31.533413  940301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:50:31.549435  940301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I0127 02:50:31.549962  940301 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:50:31.550589  940301 main.go:141] libmachine: Using API Version  1
	I0127 02:50:31.550618  940301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:50:31.551070  940301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:50:31.551296  940301 main.go:141] libmachine: (stopped-upgrade-883403) Calling .DriverName
	I0127 02:50:31.553028  940301 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 02:50:31.554232  940301 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:50:31.554531  940301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:50:31.554569  940301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:50:31.571436  940301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0127 02:50:31.572019  940301 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:50:31.572536  940301 main.go:141] libmachine: Using API Version  1
	I0127 02:50:31.572564  940301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:50:31.573006  940301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:50:31.573232  940301 main.go:141] libmachine: (stopped-upgrade-883403) Calling .DriverName
	I0127 02:50:31.612735  940301 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:50:26.838856  939817 crio.go:462] duration metric: took 2.220657856s to copy over tarball
	I0127 02:50:26.838949  939817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 02:50:31.479385  939817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.640404948s)
	I0127 02:50:31.479417  939817 crio.go:469] duration metric: took 4.640515582s to extract the tarball
	I0127 02:50:31.479427  939817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 02:50:31.532051  939817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:50:31.567228  939817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0127 02:50:31.567259  939817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:50:31.567346  939817 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:31.567385  939817 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.567404  939817 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.567412  939817 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 02:50:31.567442  939817 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.567461  939817 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.567369  939817 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.567554  939817 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.568793  939817 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.568824  939817 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 02:50:31.568985  939817 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.568988  939817 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.569033  939817 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:31.568803  939817 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.569104  939817 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.569181  939817 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.613977  940301 start.go:297] selected driver: kvm2
	I0127 02:50:31.613999  940301 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-883403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-883403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:31.614142  940301 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:50:31.615154  940301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:31.615249  940301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:50:31.631242  940301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:50:31.631746  940301 cni.go:84] Creating CNI manager for ""
	I0127 02:50:31.631813  940301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:31.631900  940301 start.go:340] cluster config:
	{Name:stopped-upgrade-883403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-883403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:31.632055  940301 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:31.634701  940301 out.go:177] * Starting "stopped-upgrade-883403" primary control-plane node in "stopped-upgrade-883403" cluster
	I0127 02:50:32.674872  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:32.675375  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find current IP address of domain NoKubernetes-954952 in network mk-NoKubernetes-954952
	I0127 02:50:32.675398  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | I0127 02:50:32.675359  940039 retry.go:31] will retry after 3.895825804s: waiting for domain to come up
	I0127 02:50:32.222491  939626 pod_ready.go:103] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"False"
	I0127 02:50:33.264748  939626 pod_ready.go:93] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:33.264779  939626 pod_ready.go:82] duration metric: took 5.548789192s for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:33.264803  939626 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:35.272412  939626 pod_ready.go:103] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"False"
	I0127 02:50:35.771681  939626 pod_ready.go:93] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:35.771707  939626 pod_ready.go:82] duration metric: took 2.506896551s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:35.771717  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:31.635870  940301 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 02:50:31.635916  940301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0127 02:50:31.635926  940301 cache.go:56] Caching tarball of preloaded images
	I0127 02:50:31.636030  940301 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:50:31.636045  940301 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0127 02:50:31.636135  940301 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/stopped-upgrade-883403/config.json ...
	I0127 02:50:31.636313  940301 start.go:360] acquireMachinesLock for stopped-upgrade-883403: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:50:31.783239  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.788799  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.790347  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.796181  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 02:50:31.801517  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.807446  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.812788  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.926322  939817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0127 02:50:31.926395  939817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.926447  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:31.987785  939817 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 02:50:31.987851  939817 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.987847  939817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0127 02:50:31.987885  939817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.987902  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:31.987933  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.037925  939817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0127 02:50:32.037966  939817 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 02:50:32.037995  939817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.038005  939817 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.038036  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.038057  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.037941  939817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0127 02:50:32.038109  939817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.038114  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.038041  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.038119  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.038152  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.097031  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.097077  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.097117  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.097174  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.097189  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.097252  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.173723  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.173856  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.188112  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.188182  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.188224  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.188226  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.258993  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 02:50:32.259054  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.259101  939817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:32.303881  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.306812  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0127 02:50:32.307905  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.307917  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0127 02:50:32.333461  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0127 02:50:32.333528  939817 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0127 02:50:32.333572  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0127 02:50:32.361146  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 02:50:32.361279  939817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.379566  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0127 02:50:32.398572  939817 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0127 02:50:32.398615  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
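
The existence checks above follow one pattern: stat the target path on the node, and only transfer the cached archive when stat exits non-zero. A minimal local sketch of that pattern, with illustrative paths and a copyIfMissing helper that is not part of minikube:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // copyIfMissing stats dst and copies src there only when the stat fails,
    // mirroring the "existence check -> scp" sequence in the log above.
    func copyIfMissing(src, dst string) error {
        if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
            fmt.Printf("%s already present, skipping transfer\n", dst)
            return nil
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o644)
    }

    func main() {
        if err := copyIfMissing("coredns_v1.8.6.tar", "/tmp/minikube-images/coredns_v1.8.6"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
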
	I0127 02:50:32.536504  939817 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.536596  939817 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.706131  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:33.625849  939817 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (1.089218267s)
	I0127 02:50:33.625887  939817 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 02:50:33.625925  939817 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:33.625998  939817 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:35.680003  939817 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.053967898s)
	I0127 02:50:35.680045  939817 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 02:50:35.680096  939817 cache_images.go:92] duration metric: took 4.112821823s to LoadCachedImages
	W0127 02:50:35.680173  939817 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
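
Once an archive is on the node it is loaded into the CRI-O image store with podman, and the log records a duration metric for each load. A hedged sketch of that single step; the archive path is taken from the log, and running it requires podman and sudo on the host:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // loadImage shells out to "sudo podman load -i <archive>" and reports the
    // elapsed time, matching the Completed:/duration lines above.
    func loadImage(archive string) error {
        start := time.Now()
        out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s failed: %v\n%s", archive, err, out)
        }
        fmt.Printf("loaded %s in %s\n", archive, time.Since(start))
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
            fmt.Println(err)
        }
    }
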
	I0127 02:50:35.680194  939817 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.24.1 crio true true} ...
	I0127 02:50:35.680319  939817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=running-upgrade-078958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-078958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:50:35.680405  939817 ssh_runner.go:195] Run: crio config
	I0127 02:50:35.720474  939817 cni.go:84] Creating CNI manager for ""
	I0127 02:50:35.720509  939817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:35.720521  939817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:50:35.720548  939817 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-078958 NodeName:running-upgrade-078958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:50:35.720740  939817 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-078958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:50:35.720821  939817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0127 02:50:35.728831  939817 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:50:35.728944  939817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:50:35.736344  939817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0127 02:50:35.749973  939817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:50:35.764163  939817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
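
The three "scp memory" lines above write generated content (the kubelet drop-in, the kubelet unit, and kubeadm.yaml.new) straight from memory rather than from a local file. One simple local stand-in for that shape is piping the bytes through sudo tee; the path and content below are illustrative, not what minikube writes:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
        "os/exec"
    )

    // writeFileFromMemory lands in-memory bytes at path by piping them into
    // "sudo tee". The real transfer in the log runs over SSH; this only shows
    // the write-from-memory shape.
    func writeFileFromMemory(path string, data []byte) error {
        cmd := exec.Command("sudo", "tee", path)
        cmd.Stdin = bytes.NewReader(data)
        cmd.Stdout = io.Discard // drop tee's echo of the content
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        unit := []byte("[Service]\nExecStart=\n")
        if err := writeFileFromMemory("/tmp/10-kubeadm.conf", unit); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
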
	I0127 02:50:35.780217  939817 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0127 02:50:35.783334  939817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:35.909553  939817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:50:35.922007  939817 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958 for IP: 192.168.39.156
	I0127 02:50:35.922040  939817 certs.go:194] generating shared ca certs ...
	I0127 02:50:35.922064  939817 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:35.922276  939817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:50:35.922326  939817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:50:35.922338  939817 certs.go:256] generating profile certs ...
	I0127 02:50:35.922432  939817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.key
	I0127 02:50:35.922458  939817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329
	I0127 02:50:35.922482  939817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.156]
	I0127 02:50:36.026000  939817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 ...
	I0127 02:50:36.026037  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329: {Name:mk6f7f9dc2bf1ddc776a189d909124c00fa38061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.026219  939817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329 ...
	I0127 02:50:36.026234  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329: {Name:mkcc6a93879d51b776fbb9cbb1d304cddf8acd1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.026306  939817 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt
	I0127 02:50:36.026477  939817 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key
	I0127 02:50:36.026623  939817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.key
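
The apiserver certificate generated above carries the four IP SANs shown in the log (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.156) and is signed by the shared minikubeCA. A minimal sketch of issuing such a cert with crypto/x509; key type, lifetimes and subject names are illustrative choices, not minikube's exact parameters:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA (illustrative key type).
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs listed in the log above.
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.156"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
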
	I0127 02:50:36.026737  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:50:36.026768  939817 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:50:36.026778  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:50:36.026799  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:50:36.026833  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:50:36.026854  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:50:36.026899  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:50:36.027547  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:50:36.055013  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:50:36.077238  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:50:36.096526  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:50:36.117518  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 02:50:36.148005  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:50:36.170689  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:50:36.190458  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:50:36.211348  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:50:36.231485  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:50:36.250627  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:50:36.270518  939817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:50:36.285047  939817 ssh_runner.go:195] Run: openssl version
	I0127 02:50:36.289829  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:50:36.299555  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.303658  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.303724  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.308759  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:50:36.317636  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:50:36.326680  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.330673  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.330775  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.335566  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:50:36.343261  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:50:36.352901  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.357326  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.357388  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.362550  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
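
The ls/openssl/ln sequences above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash, so hash-named lookups such as b5213941.0 resolve to the PEM file. A small sketch of the same pattern; it only prints the ln command instead of creating the symlink:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // certHashLink asks openssl for the subject hash of a CA certificate, then
    // shows how it would be linked as <hash>.0 under /etc/ssl/certs.
    func certHashLink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("openssl x509 -hash %s: %v", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
        return nil
    }

    func main() {
        if err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
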
	I0127 02:50:36.369804  939817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:50:36.373575  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:50:36.379041  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:50:36.383863  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:50:36.388730  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:50:36.394108  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:50:36.398724  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
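
The six -checkend 86400 calls verify that each control-plane certificate stays valid for at least another 24 hours before it is reused. The same check expressed with crypto/x509, with an assumed certificate path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate in pemPath expires within d,
    // which is what "openssl x509 -checkend 86400" answers for d = 24h.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
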
	I0127 02:50:36.407602  939817 kubeadm.go:392] StartCluster: {Name:running-upgrade-078958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-078958 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:36.407690  939817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:50:36.407762  939817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:50:36.435183  939817 cri.go:89] found id: "3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58"
	I0127 02:50:36.435208  939817 cri.go:89] found id: "2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8"
	I0127 02:50:36.435212  939817 cri.go:89] found id: "2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906"
	I0127 02:50:36.435215  939817 cri.go:89] found id: "63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91"
	I0127 02:50:36.435218  939817 cri.go:89] found id: ""
	I0127 02:50:36.435272  939817 ssh_runner.go:195] Run: sudo runc list -f json
	I0127 02:50:36.458191  939817 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906","pid":1052,"status":"running","bundle":"/run/containers/storage/overlay-containers/2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906/userdata","rootfs":"/var/lib/containers/storage/overlay/5f521f98f9413818d12ffa435880938388ed4aca451d588597ae85e364f7dfe9/merged","created":"2025-01-27T02:49:55.374056791Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"911c4b27","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"911c4b27\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.279850675Z","io.kubernetes.cri-o.Image":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.24.1","io.kubernetes.cri-o.ImageRef":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-078958_aa6b1ad2a3e18dae7ecd309d3ee896a2/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apis
erver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5f521f98f9413818d12ffa435880938388ed4aca451d588597ae85e364f7dfe9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa6b1ad2a3e18dae7ecd309d3ee896a2/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa6b1ad2a3e18dae7ecd309d3ee896a2/containers/kube-apiserver/fcff9216\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.156:8443","kubernetes.io/config.hash":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubernetes.io/config.seen":"2025-01-27T02:49:53.249113216Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.Collec
tMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8","pid":1070,"status":"running","bundle":"/run/containers/storage/overlay-containers/2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8/userdata","rootfs":"/var/lib/containers/storage/overlay/2936cc5689b3a7c67ee38391192fe7ec4f3141f682b21520111e04fc544f0f22/merged","created":"2025-01-27T02:49:55.484134405Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eff52b7d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eff52b7d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernete
s.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.39518116Z","io.kubernetes.cri-o.Image":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.24.1","io.kubernetes.cri-o.ImageRef":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"90002065a0378229711bc7c07d28de07\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-078958_90002065a03782
29711bc7c07d28de07/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2936cc5689b3a7c67ee38391192fe7ec4f3141f682b21520111e04fc544f0f22/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"
/var/lib/kubelet/pods/90002065a0378229711bc7c07d28de07/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/90002065a0378229711bc7c07d28de07/containers/kube-scheduler/2bfe1ac7\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.hash":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.seen":"2025-01-27T02:49:53.249115491Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"
3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58","pid":1101,"status":"running","bundle":"/run/containers/storage/overlay-containers/3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58/userdata","rootfs":"/var/lib/containers/storage/overlay/609abfee6174fb3fe9afadd2b9599cc3ad5ebc4e0631eddaed4067020b343dda/merged","created":"2025-01-27T02:49:55.87661325Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4d0dbe90","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4d0dbe90\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-
o.ContainerID":"3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.675511865Z","io.kubernetes.cri-o.Image":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri-o.ImageRef":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"df8e097e73221d9c081cff50339554fb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-078958_df8e097e73221d9c081cff50339554fb/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609abfee6174fb3fe9afadd2b9599cc3ad5ebc4e0631eddaed4067020b343dda/merged","io.kuber
netes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df8e097e73221d9c081cff50339554fb/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df8e097e73221d9c081cff50339554fb/containers/etcd/dc4d494d\",\"readonly\":false},{\"container_path\":\"/var/lib/minik
ube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df8e097e73221d9c081cff50339554fb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.156:2379","kubernetes.io/config.hash":"df8e097e73221d9c081cff50339554fb","kubernetes.io/config.seen":"2025-01-27T02:49:53.249067788Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","pid":979,"status":"running","bundle":"/run/containers
/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata","rootfs":"/var/lib/containers/storage/overlay/fdb5851d5627361ee0dc37f342541b619002dd58fb27402b319f697cb48bbf2c/merged","created":"2025-01-27T02:49:54.816195525Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.156:2379\",\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249067788Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"df8e097e73221d9c081cff50339554fb\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-poddf8e097e73221d9c081cff50339554fb.slice","io.kubernetes.cri-o.ContainerID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.ContainerType":"sand
box","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.619335447Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"df8e097e73221d9c081cff50339554fb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-078958\",\"component\":\"etcd\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-078958_df8e097e73221d9c081cff50339554fb/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"etcd-running-upgrade-078958\",\"UID\":\"df8e097e73221d
9c081cff50339554fb\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fdb5851d5627361ee0dc37f342541b619002dd58fb27402b319f697cb48bbf2c/merged","io.kubernetes.cri-o.Name":"k8s_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c0408
39/userdata/shm","io.kubernetes.pod.name":"etcd-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df8e097e73221d9c081cff50339554fb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.156:2379","kubernetes.io/config.hash":"df8e097e73221d9c081cff50339554fb","kubernetes.io/config.seen":"2025-01-27T02:49:53.249067788Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91","pid":1017,"status":"running","bundle":"/run/containers/storage/overlay-containers/63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5d9e74224779bd114d5c13f236188a7e8f5445f5a33a081a0978791a795136/merged","created":"2025-01-27T02:49:55.237331165Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c682979","io
.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c682979\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.13951357Z","io.kubernetes.cri-o.Image":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.24.1","io.kubernetes.cri-o.ImageRef":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b8050765
38d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2b90f71684ea3d808f7d6100624bc03d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-078958_2b90f71684ea3d808f7d6100624bc03d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5d9e74224779bd114d5c13f236188a7e8f5445f5a33a081a0978791a795136/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f
e029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2b90f71684ea3d808f7d6100624bc03d/containers/kube-controller-manager/bbf0657d\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2b90f71684ea3d808f7d6100624bc03d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_pa
th\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.hash":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.seen":"2025-01-27T02:49:53.249114461Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424
bd7ac182","pid":971,"status":"running","bundle":"/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata","rootfs":"/var/lib/containers/storage/overlay/74a6482c994a439196b12a22b25dd2f7c46b5c76ac8a31bbdb7e525931de5bcc/merged","created":"2025-01-27T02:49:54.76796547Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249113216Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.156:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podaa6b1ad2a3e18dae7ecd309d3ee896a2.slice","io.kubernetes.cri-o.ContainerID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-running-upgrade-078958
_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.596164276Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-078958\",\"component\":\"kube-apiserver\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-078958_aa6b1ad2a3e18dae7ecd309d3ee896a2/9ae1f057d0a7f9f5730f352d0b7272fe308b01
e54c2de6ec75b53424bd7ac182.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-apiserver-running-upgrade-078958\",\"UID\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/74a6482c994a439196b12a22b25dd2f7c46b5c76ac8a31bbdb7e525931de5bcc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.SeccompProfilePath":"runtime
/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.156:8443","kubernetes.io/config.hash":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubernetes.io/config.seen":"2025-01-27T02:49:53.249113216Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","pid":945,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata","rootfs":"/var/lib/containers/storage/overlay/a69df51a0a5afd3a1644e9dd576301ab5496a9916
53d68c1594be7d9d009b7ac/merged","created":"2025-01-27T02:49:54.68111673Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249115491Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"90002065a0378229711bc7c07d28de07\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod90002065a0378229711bc7c07d28de07.slice","io.kubernetes.cri-o.ContainerID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.591371001Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f5c818db
aa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"90002065a0378229711bc7c07d28de07\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-078958\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-078958_90002065a0378229711bc7c07d28de07/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-scheduler-running-upgrade-078958\",\"UID\":\"90002065a0378229711bc7c07d28de07\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a69df51a0a5afd3a1644e9dd576301ab5496a991653d68c1594be7d9d009b7ac/merg
ed","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"90002065a0378229711bc7c07d28de07","kubernetes
.io/config.hash":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.seen":"2025-01-27T02:49:53.249115491Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","pid":980,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata","rootfs":"/var/lib/containers/storage/overlay/0989ec2728a9921e2b9900c2efc976d857753d007a74b492674c47d22ff70fb7/merged","created":"2025-01-27T02:49:54.804216011Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"2b90f71684ea3d808f7d6100624bc03d\",\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249114461Z\"}","io.kubernetes.cri-o.Cgroup
Parent":"kubepods-burstable-pod2b90f71684ea3d808f7d6100624bc03d.slice","io.kubernetes.cri-o.ContainerID":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.628754741Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-078958\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name
\":\"POD\",\"io.kubernetes.pod.uid\":\"2b90f71684ea3d808f7d6100624bc03d\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-078958_2b90f71684ea3d808f7d6100624bc03d/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-controller-manager-running-upgrade-078958\",\"UID\":\"2b90f71684ea3d808f7d6100624bc03d\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0989ec2728a9921e2b9900c2efc976d857753d007a74b492674c47d22ff70fb7/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/
run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.hash":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.seen":"2025-01-27T02:49:53.249114461Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0127 02:50:36.458674  939817 cri.go:126] list returned 8 containers
	I0127 02:50:36.458694  939817 cri.go:129] container: {ID:2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 Status:running}
	I0127 02:50:36.458728  939817 cri.go:135] skipping {2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 running}: state = "running", want "paused"
	I0127 02:50:36.458745  939817 cri.go:129] container: {ID:2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 Status:running}
	I0127 02:50:36.458752  939817 cri.go:135] skipping {2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 running}: state = "running", want "paused"
	I0127 02:50:36.458760  939817 cri.go:129] container: {ID:3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 Status:running}
	I0127 02:50:36.458769  939817 cri.go:135] skipping {3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 running}: state = "running", want "paused"
	I0127 02:50:36.458774  939817 cri.go:129] container: {ID:5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839 Status:running}
	I0127 02:50:36.458781  939817 cri.go:131] skipping 5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839 - not in ps
	I0127 02:50:36.458814  939817 cri.go:129] container: {ID:63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91 Status:running}
	I0127 02:50:36.458829  939817 cri.go:135] skipping {63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91 running}: state = "running", want "paused"
	I0127 02:50:36.458836  939817 cri.go:129] container: {ID:9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182 Status:running}
	I0127 02:50:36.458845  939817 cri.go:131] skipping 9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182 - not in ps
	I0127 02:50:36.458852  939817 cri.go:129] container: {ID:f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b Status:running}
	I0127 02:50:36.458858  939817 cri.go:131] skipping f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b - not in ps
	I0127 02:50:36.458864  939817 cri.go:129] container: {ID:fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780 Status:running}
	I0127 02:50:36.458878  939817 cri.go:131] skipping fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780 - not in ps
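
The skipping decisions above combine two listings: the IDs reported by crictl ps for the kube-system namespace, and the full runc list with per-container status. Only containers present in both and already paused would be kept; everything still running, or "not in ps" (the sandbox IDs), is skipped. A small sketch of that filter over hypothetical, shortened IDs:

    package main

    import "fmt"

    type runcContainer struct {
        ID     string
        Status string // "running" or "paused"
    }

    // selectPaused keeps only containers that both appear in the crictl listing
    // and are already paused, mirroring the skip messages in the log above.
    func selectPaused(fromRunc []runcContainer, inCrictl map[string]bool) []string {
        var keep []string
        for _, c := range fromRunc {
            if !inCrictl[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID)
                continue
            }
            if c.Status != "paused" {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, "paused")
                continue
            }
            keep = append(keep, c.ID)
        }
        return keep
    }

    func main() {
        // Hypothetical, shortened IDs purely for illustration.
        runcList := []runcContainer{
            {ID: "container-a", Status: "running"},
            {ID: "sandbox-b", Status: "running"},
        }
        crictlIDs := map[string]bool{"container-a": true}
        fmt.Println("kept:", selectPaused(runcList, crictlIDs))
    }
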
	I0127 02:50:36.458934  939817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0127 02:50:36.467489  939817 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0127 02:50:36.467518  939817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:50:36.467525  939817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:50:36.467582  939817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:50:36.474914  939817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.475554  939817 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-078958" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:36.475799  939817 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-078958" cluster setting kubeconfig missing "running-upgrade-078958" context setting]
	I0127 02:50:36.476216  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.477114  939817 kapi.go:59] client config for running-upgrade-078958: &rest.Config{Host:"https://192.168.39.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.crt", KeyFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.key", CAFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
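
The kapi.go:59 dump above is the client-go rest.Config minikube builds for the restarted cluster: the apiserver URL plus the profile's client certificate, key, and cluster CA. The sketch below builds an equivalent client by hand and lists kube-system pods; the file paths are the ones from this log and are specific to this CI host, so treat them as illustrative only.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Mirrors the rest.Config logged above; paths belong to this CI host.
        cfg := &rest.Config{
            Host: "https://192.168.39.156:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.key",
                CAFile:   "/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }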
	I0127 02:50:36.477803  939817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:50:36.485206  939817 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "running-upgrade-078958"
	   kubeletExtraArgs:
	     node-ip: 192.168.39.156
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
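
kubeadm.go:640 decides to reconfigure because the unified diff between the deployed /var/tmp/minikube/kubeadm.yaml and the freshly rendered kubeadm.yaml.new is not empty (the criSocket URI and the kubelet cgroup/hairpin/timeout settings changed). A minimal sketch of that kind of drift check using os/exec, relying on diff's exit code convention (0 means identical, 1 means the files differ); this is not the minikube implementation itself.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrift runs `diff -u old new` and reports whether the files differ.
    // diff exits 0 when the files match, 1 when they differ, and >1 on error.
    func configDrift(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // files differ: out holds the unified diff
        }
        return false, "", err // diff itself failed (missing file, etc.)
    }

    func main() {
        drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drift {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }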
	I0127 02:50:36.485228  939817 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:50:36.485244  939817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 02:50:36.485295  939817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:50:36.513719  939817 cri.go:89] found id: "3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58"
	I0127 02:50:36.513744  939817 cri.go:89] found id: "2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8"
	I0127 02:50:36.513748  939817 cri.go:89] found id: "2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906"
	I0127 02:50:36.513751  939817 cri.go:89] found id: "63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91"
	I0127 02:50:36.513754  939817 cri.go:89] found id: ""
	I0127 02:50:36.513759  939817 cri.go:252] Stopping containers: [3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91]
	I0127 02:50:36.513826  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:36.517200  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91
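
Stopping the kube-system containers is done by shelling out to crictl with a 10 second stop timeout; the W-level message at kubeadm.go:644 further down shows the call can fail mid-restart (here because a container layer had already been unmounted) and that minikube treats the failure as non-fatal. A sketch of the same invocation via os/exec, assuming sudo and /usr/bin/crictl are available on the guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // stopContainers mirrors the `sudo /usr/bin/crictl stop --timeout=10 <ids...>` call above.
    // A failure is reported but not treated as fatal, matching the W-level log entry later on.
    func stopContainers(ids []string) {
        args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            fmt.Printf("warning: failed to stop kube-system containers: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("stopped %d containers\n", len(ids))
    }

    func main() {
        // Placeholder IDs; the log passes the four full container IDs found above.
        stopContainers([]string{"abc123", "def456"})
    }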
	I0127 02:50:36.777420  939626 pod_ready.go:93] pod "kube-apiserver-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:36.777444  939626 pod_ready.go:82] duration metric: took 1.005719886s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:36.777453  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.284567  939626 pod_ready.go:93] pod "kube-controller-manager-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.284605  939626 pod_ready.go:82] duration metric: took 1.507143563s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.284621  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.291489  939626 pod_ready.go:93] pod "kube-proxy-9pg5p" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.291522  939626 pod_ready.go:82] duration metric: took 6.891907ms for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.291536  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.797432  939626 pod_ready.go:93] pod "kube-scheduler-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.797459  939626 pod_ready.go:82] duration metric: took 505.914331ms for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.797467  939626 pod_ready.go:39] duration metric: took 11.090421806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
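
pod_ready.go waits for each system-critical pod to carry the Ready condition before continuing, recording the per-pod wait as a duration metric. The sketch below is a hedged version of such a readiness poll with client-go; the kubeconfig path and the 4 minute budget are assumptions, and the pod name is taken from the log above.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        start := time.Now()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-pause-622238", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Printf("pod Ready after %s\n", time.Since(start))
                return
            }
            if time.Since(start) > 4*time.Minute {
                panic("timed out waiting for pod to be Ready")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }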
	I0127 02:50:38.797486  939626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 02:50:38.809482  939626 ops.go:34] apiserver oom_adj: -16
	I0127 02:50:38.809503  939626 kubeadm.go:597] duration metric: took 19.769699245s to restartPrimaryControlPlane
	I0127 02:50:38.809512  939626 kubeadm.go:394] duration metric: took 19.871243976s to StartCluster
	I0127 02:50:38.809532  939626 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:38.809608  939626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:38.810379  939626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:38.810611  939626 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 02:50:38.810692  939626 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:50:38.810908  939626 config.go:182] Loaded profile config "pause-622238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:50:38.812090  939626 out.go:177] * Verifying Kubernetes components...
	I0127 02:50:38.812797  939626 out.go:177] * Enabled addons: 
	I0127 02:50:36.572853  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:36.573505  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find current IP address of domain NoKubernetes-954952 in network mk-NoKubernetes-954952
	I0127 02:50:36.573556  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | I0127 02:50:36.573490  940039 retry.go:31] will retry after 4.167241164s: waiting for domain to come up
	I0127 02:50:40.746019  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:40.746503  939995 main.go:141] libmachine: (NoKubernetes-954952) found domain IP: 192.168.61.132
	I0127 02:50:40.746525  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has current primary IP address 192.168.61.132 and MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:40.746532  939995 main.go:141] libmachine: (NoKubernetes-954952) reserving static IP address...
	I0127 02:50:40.746880  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-954952", mac: "52:54:00:f8:f3:cb", ip: "192.168.61.132"} in network mk-NoKubernetes-954952
	I0127 02:50:38.813525  939626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:38.814082  939626 addons.go:514] duration metric: took 3.40522ms for enable addons: enabled=[]
	I0127 02:50:38.959905  939626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:50:38.976352  939626 node_ready.go:35] waiting up to 6m0s for node "pause-622238" to be "Ready" ...
	I0127 02:50:38.979219  939626 node_ready.go:49] node "pause-622238" has status "Ready":"True"
	I0127 02:50:38.979241  939626 node_ready.go:38] duration metric: took 2.837588ms for node "pause-622238" to be "Ready" ...
	I0127 02:50:38.979253  939626 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:50:38.984174  939626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.988546  939626 pod_ready.go:93] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.988569  939626 pod_ready.go:82] duration metric: took 4.366573ms for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.988580  939626 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.368411  939626 pod_ready.go:93] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:39.368448  939626 pod_ready.go:82] duration metric: took 379.85883ms for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.368463  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.768838  939626 pod_ready.go:93] pod "kube-apiserver-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:39.768887  939626 pod_ready.go:82] duration metric: took 400.414809ms for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.768905  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.168617  939626 pod_ready.go:93] pod "kube-controller-manager-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.168647  939626 pod_ready.go:82] duration metric: took 399.732256ms for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.168660  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.569146  939626 pod_ready.go:93] pod "kube-proxy-9pg5p" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.569177  939626 pod_ready.go:82] duration metric: took 400.507282ms for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.569191  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	W0127 02:50:36.708916  939817 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:50:36Z" level=fatal msg="stopping the container \"3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58\": rpc error: code = Unknown desc = failed to unmount container 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58: layer not known"
	I0127 02:50:36.709017  939817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:50:36.738557  939817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:50:36.747193  939817 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jan 27 02:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jan 27 02:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jan 27 02:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jan 27 02:49 /etc/kubernetes/scheduler.conf
	
	I0127 02:50:36.747273  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0127 02:50:36.754560  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.754632  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:50:36.763607  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0127 02:50:36.772262  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.772334  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:50:36.781343  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0127 02:50:36.788670  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.788724  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:50:36.795927  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0127 02:50:36.803058  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.803111  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
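
For each kubeconfig under /etc/kubernetes, kubeadm.go:163 greps for the placeholder endpoint https://control-plane.minikube.internal:0 and then removes the file so that the later `kubeadm init phase kubeconfig all` regenerates it. The sketch below only mirrors the sequence observed in the log (grep exits non-zero, the file is removed); the actual decision logic inside minikube may differ.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:0"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // Mirrors the logged sequence: grep for the placeholder endpoint, note the
            // result, then remove the file so kubeadm can regenerate it.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, f, err)
            }
            if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                fmt.Printf("failed to remove %s: %v\n", f, err)
            }
        }
    }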
	I0127 02:50:36.811040  939817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:50:36.818693  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:36.906058  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.482822  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.847115  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.908289  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:38.022247  939817 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:50:38.022341  939817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:50:38.044489  939817 api_server.go:72] duration metric: took 22.240139ms to wait for apiserver process to appear ...
	I0127 02:50:38.044519  939817 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:50:38.044541  939817 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0127 02:50:40.054649  939817 api_server.go:279] https://192.168.39.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:50:40.054695  939817 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
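
While etcd is still coming back, /healthz answers 500 with "[-]etcd failed: reason withheld", so api_server.go keeps polling until it gets a 200. A minimal sketch of that poll with net/http follows; for brevity it skips TLS verification, whereas minikube trusts the cluster CA instead, and the retry interval and deadline here are assumptions.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Sketch only: InsecureSkipVerify avoids wiring up the cluster CA in this example.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        url := "https://192.168.39.156:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }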
	I0127 02:50:40.054717  939817 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0127 02:50:40.968553  939626 pod_ready.go:93] pod "kube-scheduler-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.968578  939626 pod_ready.go:82] duration metric: took 399.378375ms for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.968587  939626 pod_ready.go:39] duration metric: took 1.989321738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:50:40.968602  939626 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:50:40.968663  939626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:50:40.982963  939626 api_server.go:72] duration metric: took 2.172313148s to wait for apiserver process to appear ...
	I0127 02:50:40.982991  939626 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:50:40.983011  939626 api_server.go:253] Checking apiserver healthz at https://192.168.50.58:8443/healthz ...
	I0127 02:50:40.988340  939626 api_server.go:279] https://192.168.50.58:8443/healthz returned 200:
	ok
	I0127 02:50:40.989372  939626 api_server.go:141] control plane version: v1.32.1
	I0127 02:50:40.989397  939626 api_server.go:131] duration metric: took 6.396298ms to wait for apiserver health ...
	I0127 02:50:40.989406  939626 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:50:41.170415  939626 system_pods.go:59] 6 kube-system pods found
	I0127 02:50:41.170447  939626 system_pods.go:61] "coredns-668d6bf9bc-f22h8" [8dad4f16-9af8-4d93-9cf5-232c0a0935a0] Running
	I0127 02:50:41.170452  939626 system_pods.go:61] "etcd-pause-622238" [946278f4-86ff-43c3-b930-a9a0d1214046] Running
	I0127 02:50:41.170456  939626 system_pods.go:61] "kube-apiserver-pause-622238" [fc330ca3-25d8-4cb9-ae4e-35832b066331] Running
	I0127 02:50:41.170459  939626 system_pods.go:61] "kube-controller-manager-pause-622238" [f7e24926-6b27-4133-b1ea-967e10c0efab] Running
	I0127 02:50:41.170463  939626 system_pods.go:61] "kube-proxy-9pg5p" [b532db91-62b9-4bee-bbc9-1613f5989325] Running
	I0127 02:50:41.170466  939626 system_pods.go:61] "kube-scheduler-pause-622238" [a850b25b-fc22-4574-a131-70861f2c285a] Running
	I0127 02:50:41.170473  939626 system_pods.go:74] duration metric: took 181.060834ms to wait for pod list to return data ...
	I0127 02:50:41.170481  939626 default_sa.go:34] waiting for default service account to be created ...
	I0127 02:50:41.368731  939626 default_sa.go:45] found service account: "default"
	I0127 02:50:41.368768  939626 default_sa.go:55] duration metric: took 198.280596ms for default service account to be created ...
	I0127 02:50:41.368782  939626 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 02:50:41.570032  939626 system_pods.go:87] 6 kube-system pods found
	I0127 02:50:41.768670  939626 system_pods.go:105] "coredns-668d6bf9bc-f22h8" [8dad4f16-9af8-4d93-9cf5-232c0a0935a0] Running
	I0127 02:50:41.768695  939626 system_pods.go:105] "etcd-pause-622238" [946278f4-86ff-43c3-b930-a9a0d1214046] Running
	I0127 02:50:41.768702  939626 system_pods.go:105] "kube-apiserver-pause-622238" [fc330ca3-25d8-4cb9-ae4e-35832b066331] Running
	I0127 02:50:41.768709  939626 system_pods.go:105] "kube-controller-manager-pause-622238" [f7e24926-6b27-4133-b1ea-967e10c0efab] Running
	I0127 02:50:41.768715  939626 system_pods.go:105] "kube-proxy-9pg5p" [b532db91-62b9-4bee-bbc9-1613f5989325] Running
	I0127 02:50:41.768722  939626 system_pods.go:105] "kube-scheduler-pause-622238" [a850b25b-fc22-4574-a131-70861f2c285a] Running
	I0127 02:50:41.768731  939626 system_pods.go:147] duration metric: took 399.941634ms to wait for k8s-apps to be running ...
	I0127 02:50:41.768740  939626 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 02:50:41.768800  939626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:50:41.783092  939626 system_svc.go:56] duration metric: took 14.339392ms WaitForService to wait for kubelet
	I0127 02:50:41.783130  939626 kubeadm.go:582] duration metric: took 2.97248684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:50:41.783157  939626 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:50:41.968915  939626 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:50:41.968960  939626 node_conditions.go:123] node cpu capacity is 2
	I0127 02:50:41.968973  939626 node_conditions.go:105] duration metric: took 185.810353ms to run NodePressure ...
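
The NodePressure step reads the node's reported capacity (ephemeral storage and CPU) from its status to confirm the VM is not resource constrained. A sketch of pulling the same fields with client-go; the node name comes from the log and the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-622238", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        // Matches "node storage ephemeral capacity is 17734596Ki" and "node cpu capacity is 2" above.
        fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
    }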
	I0127 02:50:41.968985  939626 start.go:241] waiting for startup goroutines ...
	I0127 02:50:41.968992  939626 start.go:246] waiting for cluster config update ...
	I0127 02:50:41.968999  939626 start.go:255] writing updated cluster config ...
	I0127 02:50:41.969296  939626 ssh_runner.go:195] Run: rm -f paused
	I0127 02:50:42.019528  939626 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 02:50:42.021172  939626 out.go:177] * Done! kubectl is now configured to use "pause-622238" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.666699887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946242666677030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fb9770c-a0f7-42e1-963f-f877f202d02c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.667167786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f57e25f-9214-49b2-b0f0-bcfd06a6467d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.667242438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f57e25f-9214-49b2-b0f0-bcfd06a6467d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.667574859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f57e25f-9214-49b2-b0f0-bcfd06a6467d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.707732691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6049048-c50b-42c3-ac85-d376c3782052 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.707807122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6049048-c50b-42c3-ac85-d376c3782052 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.708873423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b626655b-1d8e-4af6-a295-cbf021aa282c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.709252004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946242709230155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b626655b-1d8e-4af6-a295-cbf021aa282c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.709828343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8084c6cd-4e81-4516-a53f-d0eb85c7d671 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.709887110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8084c6cd-4e81-4516-a53f-d0eb85c7d671 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.710162648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8084c6cd-4e81-4516-a53f-d0eb85c7d671 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.750644527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8f587dd-054d-40b2-a664-6dc18f4556eb name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.750717386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8f587dd-054d-40b2-a664-6dc18f4556eb name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.752046924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fe73153-7bc4-44fe-a382-2b748bea1eea name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.752503536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946242752479525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fe73153-7bc4-44fe-a382-2b748bea1eea name=/runtime.v1.ImageService/ImageFsInfo
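
The CRI-O lines above are the gRPC request/response pairs (Version, ImageFsInfo, ListContainers) that the kubelet and crictl issue against the runtime socket. The sketch below makes the same two small RPCs directly with the k8s.io/cri-api generated client over /var/run/crio/crio.sock; the response field names match the dumps in the log, but treat the dialing details as an assumption rather than how CRI-O or minikube wire it up.

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Talk to CRI-O over its unix socket, the same RPCs the debug log above records.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ver, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

        img := runtimeapi.NewImageServiceClient(conn)
        fs, err := img.ImageFsInfo(context.TODO(), &runtimeapi.ImageFsInfoRequest{})
        if err != nil {
            panic(err)
        }
        for _, u := range fs.ImageFilesystems {
            fmt.Printf("image fs %s uses %d bytes\n", u.FsId.Mountpoint, u.UsedBytes.Value)
        }
    }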
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.753046265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb9f6246-3636-42f8-848b-b982cd07cfe7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.753109103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb9f6246-3636-42f8-848b-b982cd07cfe7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.753520669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb9f6246-3636-42f8-848b-b982cd07cfe7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.796987611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=539fd385-d48d-4ff1-92f8-4aa63134389a name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.797061469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=539fd385-d48d-4ff1-92f8-4aa63134389a name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.798081283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c241b32-d59a-4453-839d-2df0037fb046 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.798560475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946242798532913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c241b32-d59a-4453-839d-2df0037fb046 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.799135905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9b3a183-0aa8-4853-8cce-c5e3c03a7330 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.799200796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9b3a183-0aa8-4853-8cce-c5e3c03a7330 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:42 pause-622238 crio[2912]: time="2025-01-27 02:50:42.799515500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9b3a183-0aa8-4853-8cce-c5e3c03a7330 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a8134cd7d02fa       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   16 seconds ago       Running             kube-proxy                2                   1ffeec06b0153       kube-proxy-9pg5p
	21c957636d37d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago       Running             coredns                   1                   d2fb98eb658ae       coredns-668d6bf9bc-f22h8
	437dd26112fe8       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   21 seconds ago       Running             kube-apiserver            2                   b6b7226c8ff78       kube-apiserver-pause-622238
	c3ce858dc5535       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   21 seconds ago       Running             kube-controller-manager   2                   755a82eacc889       kube-controller-manager-pause-622238
	48c5b60752758       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago       Running             etcd                      2                   a35453f75adbb       etcd-pause-622238
	c45f3cea74699       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   21 seconds ago       Running             kube-scheduler            2                   27d12b4cc7ec8       kube-scheduler-pause-622238
	723b4e88e6210       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   27 seconds ago       Exited              kube-controller-manager   1                   bf5215f0aedd7       kube-controller-manager-pause-622238
	bcab89d41125a       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   27 seconds ago       Exited              kube-scheduler            1                   b2b582d5d4f3a       kube-scheduler-pause-622238
	825a3f8f6a885       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   27 seconds ago       Exited              kube-proxy                1                   490f4f6c968c5       kube-proxy-9pg5p
	d62e04f6f5e13       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   27 seconds ago       Exited              kube-apiserver            1                   1f1e9f60ffcf2       kube-apiserver-pause-622238
	90e530f8899a0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   27 seconds ago       Exited              etcd                      1                   8ec628c2a5c8b       etcd-pause-622238
	fe790ab890a49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   ab2780aabca04       coredns-668d6bf9bc-f22h8
	
	
	==> coredns [21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52311 - 830 "HINFO IN 3172464895456124788.1314168427043943882. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015358136s
	
	
	==> coredns [fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[313740481]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.503) (total time: 28922ms):
	Trace[313740481]: ---"Objects listed" error:<nil> 28922ms (02:49:40.426)
	Trace[313740481]: [28.922262662s] [28.922262662s] END
	[INFO] plugin/kubernetes: Trace[1652299318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.504) (total time: 28921ms):
	Trace[1652299318]: ---"Objects listed" error:<nil> 28921ms (02:49:40.426)
	Trace[1652299318]: [28.921831874s] [28.921831874s] END
	[INFO] plugin/kubernetes: Trace[1508934810]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.503) (total time: 28923ms):
	Trace[1508934810]: ---"Objects listed" error:<nil> 28923ms (02:49:40.426)
	Trace[1508934810]: [28.92363372s] [28.92363372s] END
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55012 - 15646 "HINFO IN 741992024223013000.3069113336796665424. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.152961392s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-622238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-622238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=pause-622238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T02_49_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 02:49:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-622238
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 02:50:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.58
	  Hostname:    pause-622238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3bf4199713e4d72a71493668e8f4425
	  System UUID:                e3bf4199-713e-4d72-a714-93668e8f4425
	  Boot ID:                    f75fc88e-62c3-4db5-aba9-d74454c0bb37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-f22h8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-622238                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         98s
	  kube-system                 kube-apiserver-pause-622238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-622238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-9pg5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-622238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 91s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s                kubelet          Node pause-622238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node pause-622238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node pause-622238 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeReady                97s                kubelet          Node pause-622238 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-622238 event: Registered Node pause-622238 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-622238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-622238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-622238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-622238 event: Registered Node pause-622238 in Controller
	
	
	==> dmesg <==
	[  +9.295555] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.060036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057599] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.192391] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.128417] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.286705] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.162484] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +4.389723] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.061939] kauditd_printk_skb: 158 callbacks suppressed
	[Jan27 02:49] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.078344] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.851540] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.494494] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.333731] kauditd_printk_skb: 66 callbacks suppressed
	[Jan27 02:50] systemd-fstab-generator[2329]: Ignoring "noauto" option for root device
	[  +0.166853] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.205251] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.307503] systemd-fstab-generator[2463]: Ignoring "noauto" option for root device
	[  +1.156818] systemd-fstab-generator[2877]: Ignoring "noauto" option for root device
	[  +1.869371] systemd-fstab-generator[3492]: Ignoring "noauto" option for root device
	[  +2.599346] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.076620] kauditd_printk_skb: 238 callbacks suppressed
	[  +5.605690] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.736292] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.592729] systemd-fstab-generator[4082]: Ignoring "noauto" option for root device
	
	
	==> etcd [48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721] <==
	{"level":"warn","ts":"2025-01-27T02:50:28.809183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.718646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-01-27T02:50:28.809434Z","caller":"traceutil/trace.go:171","msg":"trace[1467415693] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:419; }","duration":"167.054472ms","start":"2025-01-27T02:50:28.642365Z","end":"2025-01-27T02:50:28.809420Z","steps":["trace[1467415693] 'agreement among raft nodes before linearized reading'  (duration: 166.705349ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:28.810123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.140916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-01-27T02:50:28.812043Z","caller":"traceutil/trace.go:171","msg":"trace[1746586585] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:420; }","duration":"121.090381ms","start":"2025-01-27T02:50:28.690937Z","end":"2025-01-27T02:50:28.812027Z","steps":["trace[1746586585] 'agreement among raft nodes before linearized reading'  (duration: 119.087077ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:28.812676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.918091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2025-01-27T02:50:28.812956Z","caller":"traceutil/trace.go:171","msg":"trace[27740484] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:420; }","duration":"108.21522ms","start":"2025-01-27T02:50:28.704720Z","end":"2025-01-27T02:50:28.812935Z","steps":["trace[27740484] 'agreement among raft nodes before linearized reading'  (duration: 107.903018ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T02:50:28.977478Z","caller":"traceutil/trace.go:171","msg":"trace[512016606] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"133.065152ms","start":"2025-01-27T02:50:28.842201Z","end":"2025-01-27T02:50:28.975266Z","steps":["trace[512016606] 'process raft request'  (duration: 130.011944ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:29.330992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.637771ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14849094360650042599 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" mod_revision:441 > success:<request_put:<key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" value_size:594 lease:5625722323795266741 >> failure:<request_range:<key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T02:50:29.331330Z","caller":"traceutil/trace.go:171","msg":"trace[1979705480] linearizableReadLoop","detail":"{readStateIndex:489; appliedIndex:488; }","duration":"126.025395ms","start":"2025-01-27T02:50:29.205208Z","end":"2025-01-27T02:50:29.331234Z","steps":["trace[1979705480] 'read index received'  (duration: 465.1µs)","trace[1979705480] 'applied index is now lower than readState.Index'  (duration: 125.559059ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:29.331569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.351274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2025-01-27T02:50:29.331639Z","caller":"traceutil/trace.go:171","msg":"trace[1592011581] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:444; }","duration":"126.460482ms","start":"2025-01-27T02:50:29.205170Z","end":"2025-01-27T02:50:29.331631Z","steps":["trace[1592011581] 'agreement among raft nodes before linearized reading'  (duration: 126.271874ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T02:50:29.331823Z","caller":"traceutil/trace.go:171","msg":"trace[869005392] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"140.369071ms","start":"2025-01-27T02:50:29.191445Z","end":"2025-01-27T02:50:29.331814Z","steps":["trace[869005392] 'process raft request'  (duration: 14.330634ms)","trace[869005392] 'compare'  (duration: 124.471718ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T02:50:32.621147Z","caller":"traceutil/trace.go:171","msg":"trace[145569241] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"279.979989ms","start":"2025-01-27T02:50:32.341134Z","end":"2025-01-27T02:50:32.621114Z","steps":["trace[145569241] 'process raft request'  (duration: 279.734042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.238354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"479.820404ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14849094360650042646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:424 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T02:50:33.238526Z","caller":"traceutil/trace.go:171","msg":"trace[63584628] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"602.702334ms","start":"2025-01-27T02:50:32.635800Z","end":"2025-01-27T02:50:33.238502Z","steps":["trace[63584628] 'process raft request'  (duration: 122.592598ms)","trace[63584628] 'compare'  (duration: 479.686374ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:33.238615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.635772Z","time spent":"602.805823ms","remote":"127.0.0.1:34082","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:424 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.239407Z","caller":"traceutil/trace.go:171","msg":"trace[1971086333] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:514; }","duration":"534.882454ms","start":"2025-01-27T02:50:32.704514Z","end":"2025-01-27T02:50:33.239396Z","steps":["trace[1971086333] 'read index received'  (duration: 53.889529ms)","trace[1971086333] 'applied index is now lower than readState.Index'  (duration: 480.992291ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:33.239579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"535.055214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:4969"}
	{"level":"info","ts":"2025-01-27T02:50:33.239902Z","caller":"traceutil/trace.go:171","msg":"trace[1088969542] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:472; }","duration":"535.399465ms","start":"2025-01-27T02:50:32.704488Z","end":"2025-01-27T02:50:33.239888Z","steps":["trace[1088969542] 'agreement among raft nodes before linearized reading'  (duration: 535.034544ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.239965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.704474Z","time spent":"535.478745ms","remote":"127.0.0.1:34094","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":4992,"request content":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 "}
	{"level":"info","ts":"2025-01-27T02:50:33.240438Z","caller":"traceutil/trace.go:171","msg":"trace[1606065146] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"603.305307ms","start":"2025-01-27T02:50:32.637123Z","end":"2025-01-27T02:50:33.240429Z","steps":["trace[1606065146] 'process raft request'  (duration: 602.230759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.240695Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.637105Z","time spent":"603.53587ms","remote":"127.0.0.1:34408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:422 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.240471Z","caller":"traceutil/trace.go:171","msg":"trace[545986295] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"604.376468ms","start":"2025-01-27T02:50:32.636089Z","end":"2025-01-27T02:50:33.240465Z","steps":["trace[545986295] 'process raft request'  (duration: 603.132964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.240921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.636071Z","time spent":"604.811626ms","remote":"127.0.0.1:34176","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" mod_revision:427 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.487249Z","caller":"traceutil/trace.go:171","msg":"trace[1516247128] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"227.109598ms","start":"2025-01-27T02:50:33.260110Z","end":"2025-01-27T02:50:33.487220Z","steps":["trace[1516247128] 'process raft request'  (duration: 218.679433ms)"],"step_count":1}
	
	
	==> etcd [90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686] <==
	{"level":"info","ts":"2025-01-27T02:50:15.999913Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T02:50:16.023509Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","commit-index":425}
	{"level":"info","ts":"2025-01-27T02:50:16.023783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T02:50:16.023855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T02:50:16.023876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b223154dc276ce12 [peers: [], term: 2, commit: 425, applied: 0, lastindex: 425, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T02:50:16.039459Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T02:50:16.053527Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":402}
	{"level":"info","ts":"2025-01-27T02:50:16.072786Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T02:50:16.075081Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b223154dc276ce12","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075387Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b223154dc276ce12"}
	{"level":"info","ts":"2025-01-27T02:50:16.075429Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"b223154dc276ce12","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T02:50:16.075711Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T02:50:16.075897Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075921Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075927Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.076217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 switched to configuration voters=(12836126786655276562)"}
	{"level":"info","ts":"2025-01-27T02:50:16.076264Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","added-peer-id":"b223154dc276ce12","added-peer-peer-urls":["https://192.168.50.58:2380"]}
	{"level":"info","ts":"2025-01-27T02:50:16.076417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:50:16.076449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:50:16.080629Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T02:50:16.087556Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T02:50:16.087835Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2025-01-27T02:50:16.087846Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2025-01-27T02:50:16.089017Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T02:50:16.088961Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b223154dc276ce12","initial-advertise-peer-urls":["https://192.168.50.58:2380"],"listen-peer-urls":["https://192.168.50.58:2380"],"advertise-client-urls":["https://192.168.50.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 02:50:43 up 2 min,  0 users,  load average: 0.68, 0.30, 0.11
	Linux pause-622238 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464] <==
	I0127 02:50:25.415084       1 policy_source.go:240] refreshing policies
	I0127 02:50:25.416071       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 02:50:25.416261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 02:50:25.416629       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 02:50:25.416699       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 02:50:25.418526       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 02:50:25.428557       1 aggregator.go:171] initial CRD sync complete...
	I0127 02:50:25.428683       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 02:50:25.428716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 02:50:25.428739       1 cache.go:39] Caches are synced for autoregister controller
	I0127 02:50:25.432022       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 02:50:25.432104       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 02:50:25.436631       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 02:50:25.446268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 02:50:25.459672       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0127 02:50:25.461621       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 02:50:26.168627       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 02:50:26.238156       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 02:50:27.412746       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 02:50:27.592262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 02:50:27.654242       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 02:50:27.667685       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 02:50:28.968088       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 02:50:28.995852       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 02:50:29.008688       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3] <==
	W0127 02:50:16.241973       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 02:50:16.242734       1 options.go:238] external host was not specified, using 192.168.50.58
	I0127 02:50:16.252137       1 server.go:143] Version: v1.32.1
	I0127 02:50:16.252478       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56] <==
	
	
	==> kube-controller-manager [c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256] <==
	I0127 02:50:28.604666       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 02:50:28.602436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 02:50:28.602461       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 02:50:28.614873       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 02:50:28.602481       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 02:50:28.602493       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 02:50:28.602512       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 02:50:28.608152       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 02:50:28.620074       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 02:50:28.622360       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 02:50:28.623837       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 02:50:28.630247       1 shared_informer.go:320] Caches are synced for namespace
	I0127 02:50:28.635732       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 02:50:28.638271       1 shared_informer.go:320] Caches are synced for job
	I0127 02:50:28.643229       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 02:50:28.643254       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 02:50:28.643271       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 02:50:28.643692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 02:50:28.645790       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 02:50:28.645814       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 02:50:28.657986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 02:50:28.985716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="391.102136ms"
	I0127 02:50:28.989284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.737µs"
	I0127 02:50:33.248242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="618.069225ms"
	I0127 02:50:33.249453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.678µs"
	
	
	==> kube-proxy [825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819] <==
	
	
	==> kube-proxy [a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 02:50:26.815625       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 02:50:26.829931       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.58"]
	E0127 02:50:26.830093       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 02:50:26.873423       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 02:50:26.873490       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 02:50:26.873526       1 server_linux.go:170] "Using iptables Proxier"
	I0127 02:50:26.878543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 02:50:26.878959       1 server.go:497] "Version info" version="v1.32.1"
	I0127 02:50:26.878985       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:50:26.883624       1 config.go:329] "Starting node config controller"
	I0127 02:50:26.883733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 02:50:26.884072       1 config.go:199] "Starting service config controller"
	I0127 02:50:26.884108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 02:50:26.884132       1 config.go:105] "Starting endpoint slice config controller"
	I0127 02:50:26.884138       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 02:50:26.984361       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 02:50:26.984367       1 shared_informer.go:320] Caches are synced for node config
	I0127 02:50:26.984391       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229] <==
	
	
	==> kube-scheduler [c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53] <==
	I0127 02:50:22.847597       1 serving.go:386] Generated self-signed cert in-memory
	I0127 02:50:25.492282       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 02:50:25.492382       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:50:25.512369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 02:50:25.512757       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0127 02:50:25.512829       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0127 02:50:25.512901       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 02:50:25.512952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:50:25.512991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0127 02:50:25.513020       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 02:50:25.514973       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 02:50:25.613970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 02:50:25.613984       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0127 02:50:25.614005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461184    3623 kubelet_node_status.go:125] "Node was previously registered" node="pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461351    3623 kubelet_node_status.go:79] "Successfully registered node" node="pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461385    3623 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.462592    3623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.490017    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.559732    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-622238\" already exists" pod="kube-system/kube-controller-manager-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.559778    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.575009    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-622238\" already exists" pod="kube-system/kube-scheduler-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.575051    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.594277    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-622238\" already exists" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.594375    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.600438    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.609444    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-622238\" already exists" pod="kube-system/kube-apiserver-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.612784    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-622238\" already exists" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.074927    3623 apiserver.go:52] "Watching apiserver"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.101092    3623 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.158634    3623 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b532db91-62b9-4bee-bbc9-1613f5989325-xtables-lock\") pod \"kube-proxy-9pg5p\" (UID: \"b532db91-62b9-4bee-bbc9-1613f5989325\") " pod="kube-system/kube-proxy-9pg5p"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.159227    3623 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b532db91-62b9-4bee-bbc9-1613f5989325-lib-modules\") pod \"kube-proxy-9pg5p\" (UID: \"b532db91-62b9-4bee-bbc9-1613f5989325\") " pod="kube-system/kube-proxy-9pg5p"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.380990    3623 scope.go:117] "RemoveContainer" containerID="fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.381630    3623 scope.go:117] "RemoveContainer" containerID="825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819"
	Jan 27 02:50:31 pause-622238 kubelet[3623]: E0127 02:50:31.204199    3623 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946231203422462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:31 pause-622238 kubelet[3623]: E0127 02:50:31.204238    3623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946231203422462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:32 pause-622238 kubelet[3623]: I0127 02:50:32.322349    3623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jan 27 02:50:41 pause-622238 kubelet[3623]: E0127 02:50:41.207710    3623 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946241206784868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:41 pause-622238 kubelet[3623]: E0127 02:50:41.207752    3623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946241206784868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-622238 -n pause-622238
helpers_test.go:261: (dbg) Run:  kubectl --context pause-622238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
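
A note on the kubelet section above: the eviction manager twice rejects CRI-O's ImageFsInfoResponse with "failed to get HasDedicatedImageFs: missing image stats", even though the response reports usage for /var/lib/containers/storage/overlay-images; the full payload is embedded in the error text. For readers who want to look at the same CRI data outside the kubelet, a minimal Go sketch of that query follows. It is an illustration only, not part of the minikube test suite, and the socket path, timeout, and program layout are assumptions.

// Sketch: issue the same ImageFsInfo RPC the kubelet uses, against the
// CRI-O socket. Assumed socket path and timeout; adjust for the environment.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	const criSocket = "unix:///var/run/crio/crio.sock" // assumed CRI-O endpoint

	conn, err := grpc.NewClient(criSocket, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC the eviction manager's image-filesystem check depends on.
	resp, err := runtimeapi.NewImageServiceClient(conn).ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("image fs %s: %d bytes used, %d inodes used\n",
			fs.GetFsId().GetMountpoint(), fs.GetUsedBytes().GetValue(), fs.GetInodesUsed().GetValue())
	}
}
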
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-622238 -n pause-622238
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-622238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-622238 logs -n 25: (1.318259609s)
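
The "Last Start" trace inside the dump below includes minikube's pod_ready.go loop waiting for the coredns and etcd pods of the pause-622238 profile to report Ready. As a rough illustration of that readiness-wait pattern (a sketch, not minikube's implementation), a client-go poll could look like this; the package name, function name, and 2-second interval are assumptions.

// Sketch of a PodReady wait loop in the spirit of the pod_ready.go entries
// in the trace below; names, interval, and error handling are assumptions.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the named pod reports condition PodReady=True
// or the timeout elapses.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
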
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 5m                  |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:46 UTC | 27 Jan 25 02:46 UTC |
	|         | --cancel-scheduled             |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| stop    | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:47 UTC |
	|         | --schedule 15s                 |                        |         |         |                     |                     |
	| delete  | -p scheduled-stop-739127       | scheduled-stop-739127  | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:47 UTC |
	| start   | -p offline-crio-922784         | offline-crio-922784    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:48 UTC |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                        |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC |                     |
	|         | --no-kubernetes                |                        |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                        |         |         |                     |                     |
	|         | --driver=kvm2                  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p pause-622238 --memory=2048  | pause-622238           | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:49 UTC |
	|         | --install-addons=false         |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:47 UTC | 27 Jan 25 02:49 UTC |
	|         | --driver=kvm2                  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p running-upgrade-078958      | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:50 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                        |         |         |                     |                     |
	|         |  --container-runtime=crio      |                        |         |         |                     |                     |
	| delete  | -p offline-crio-922784         | offline-crio-922784    | jenkins | v1.35.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:48 UTC |
	| start   | -p stopped-upgrade-883403      | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:50 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                        |         |         |                     |                     |
	|         |  --container-runtime=crio      |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:50 UTC |
	|         | --no-kubernetes --driver=kvm2  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p pause-622238                | pause-622238           | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:50 UTC |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p running-upgrade-078958      | running-upgrade-078958 | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --memory=2200                  |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	| start   | -p NoKubernetes-954952         | NoKubernetes-954952    | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-883403 stop    | minikube               | jenkins | v1.26.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	| start   | -p stopped-upgrade-883403      | stopped-upgrade-883403 | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC |                     |
	|         | --memory=2200                  |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:50:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:50:31.519215  940301 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:50:31.519315  940301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:31.519320  940301 out.go:358] Setting ErrFile to fd 2...
	I0127 02:50:31.519324  940301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:31.519508  940301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:50:31.520038  940301 out.go:352] Setting JSON to false
	I0127 02:50:31.521144  940301 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12774,"bootTime":1737933457,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:50:31.521250  940301 start.go:139] virtualization: kvm guest
	I0127 02:50:31.523572  940301 out.go:177] * [stopped-upgrade-883403] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:50:31.524756  940301 notify.go:220] Checking for updates...
	I0127 02:50:31.524795  940301 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:50:31.526052  940301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:50:31.527312  940301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:31.528586  940301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:50:31.529897  940301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:50:31.531078  940301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:50:31.532745  940301 config.go:182] Loaded profile config "stopped-upgrade-883403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 02:50:31.533347  940301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:50:31.533413  940301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:50:31.549435  940301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I0127 02:50:31.549962  940301 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:50:31.550589  940301 main.go:141] libmachine: Using API Version  1
	I0127 02:50:31.550618  940301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:50:31.551070  940301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:50:31.551296  940301 main.go:141] libmachine: (stopped-upgrade-883403) Calling .DriverName
	I0127 02:50:31.553028  940301 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 02:50:31.554232  940301 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:50:31.554531  940301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:50:31.554569  940301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:50:31.571436  940301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0127 02:50:31.572019  940301 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:50:31.572536  940301 main.go:141] libmachine: Using API Version  1
	I0127 02:50:31.572564  940301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:50:31.573006  940301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:50:31.573232  940301 main.go:141] libmachine: (stopped-upgrade-883403) Calling .DriverName
	I0127 02:50:31.612735  940301 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:50:26.838856  939817 crio.go:462] duration metric: took 2.220657856s to copy over tarball
	I0127 02:50:26.838949  939817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 02:50:31.479385  939817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.640404948s)
	I0127 02:50:31.479417  939817 crio.go:469] duration metric: took 4.640515582s to extract the tarball
	I0127 02:50:31.479427  939817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 02:50:31.532051  939817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:50:31.567228  939817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0127 02:50:31.567259  939817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:50:31.567346  939817 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:31.567385  939817 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.567404  939817 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.567412  939817 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 02:50:31.567442  939817 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.567461  939817 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.567369  939817 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.567554  939817 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.568793  939817 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.568824  939817 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 02:50:31.568985  939817 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.568988  939817 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.569033  939817 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:31.568803  939817 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.569104  939817 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.569181  939817 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.613977  940301 start.go:297] selected driver: kvm2
	I0127 02:50:31.613999  940301 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-883403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-883403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:31.614142  940301 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:50:31.615154  940301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:31.615249  940301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:50:31.631242  940301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:50:31.631746  940301 cni.go:84] Creating CNI manager for ""
	I0127 02:50:31.631813  940301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:31.631900  940301 start.go:340] cluster config:
	{Name:stopped-upgrade-883403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-883403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:31.632055  940301 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:50:31.634701  940301 out.go:177] * Starting "stopped-upgrade-883403" primary control-plane node in "stopped-upgrade-883403" cluster
	I0127 02:50:32.674872  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:32.675375  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find current IP address of domain NoKubernetes-954952 in network mk-NoKubernetes-954952
	I0127 02:50:32.675398  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | I0127 02:50:32.675359  940039 retry.go:31] will retry after 3.895825804s: waiting for domain to come up
	I0127 02:50:32.222491  939626 pod_ready.go:103] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"False"
	I0127 02:50:33.264748  939626 pod_ready.go:93] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:33.264779  939626 pod_ready.go:82] duration metric: took 5.548789192s for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:33.264803  939626 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:35.272412  939626 pod_ready.go:103] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"False"
	I0127 02:50:35.771681  939626 pod_ready.go:93] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:35.771707  939626 pod_ready.go:82] duration metric: took 2.506896551s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:35.771717  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:31.635870  940301 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 02:50:31.635916  940301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0127 02:50:31.635926  940301 cache.go:56] Caching tarball of preloaded images
	I0127 02:50:31.636030  940301 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:50:31.636045  940301 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0127 02:50:31.636135  940301 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/stopped-upgrade-883403/config.json ...
	I0127 02:50:31.636313  940301 start.go:360] acquireMachinesLock for stopped-upgrade-883403: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:50:31.783239  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.788799  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.790347  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.796181  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 02:50:31.801517  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:31.807446  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:31.812788  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:31.926322  939817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0127 02:50:31.926395  939817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:31.926447  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:31.987785  939817 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 02:50:31.987851  939817 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:31.987847  939817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0127 02:50:31.987885  939817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:31.987902  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:31.987933  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.037925  939817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0127 02:50:32.037966  939817 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 02:50:32.037995  939817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.038005  939817 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.038036  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.038057  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.037941  939817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0127 02:50:32.038109  939817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.038114  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.038041  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.038119  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.038152  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:32.097031  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.097077  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.097117  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.097174  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.097189  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.097252  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.173723  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.173856  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 02:50:32.188112  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.188182  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 02:50:32.188224  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 02:50:32.188226  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.258993  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 02:50:32.259054  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 02:50:32.259101  939817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:32.303881  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 02:50:32.306812  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0127 02:50:32.307905  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 02:50:32.307917  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0127 02:50:32.333461  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0127 02:50:32.333528  939817 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0127 02:50:32.333572  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0127 02:50:32.361146  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 02:50:32.361279  939817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.379566  939817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0127 02:50:32.398572  939817 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0127 02:50:32.398615  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0127 02:50:32.536504  939817 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.536596  939817 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 02:50:32.706131  939817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:50:33.625849  939817 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (1.089218267s)
	I0127 02:50:33.625887  939817 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 02:50:33.625925  939817 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:33.625998  939817 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 02:50:35.680003  939817 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.053967898s)
	I0127 02:50:35.680045  939817 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 02:50:35.680096  939817 cache_images.go:92] duration metric: took 4.112821823s to LoadCachedImages
	W0127 02:50:35.680173  939817 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0127 02:50:35.680194  939817 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.24.1 crio true true} ...
	I0127 02:50:35.680319  939817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=running-upgrade-078958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-078958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:50:35.680405  939817 ssh_runner.go:195] Run: crio config
	I0127 02:50:35.720474  939817 cni.go:84] Creating CNI manager for ""
	I0127 02:50:35.720509  939817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:50:35.720521  939817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:50:35.720548  939817 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-078958 NodeName:running-upgrade-078958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:50:35.720740  939817 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-078958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
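The block above is the complete multi-document manifest minikube renders for kubeadm: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". As an aside (illustrative only, not part of the captured run), one minimal way to sanity-check such a file on the node before kubeadm consumes it, assuming the matching kubeadm v1.24 binary is on PATH and using the rendered path from the log:
	# Illustrative sketch only; the file path is taken from the log above.
	# Show the defaults kubeadm would merge with the rendered configuration:
	kubeadm config print init-defaults
	# Walk the full init flow without modifying the host:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run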
	
	I0127 02:50:35.720821  939817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0127 02:50:35.728831  939817 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:50:35.728944  939817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:50:35.736344  939817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0127 02:50:35.749973  939817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:50:35.764163  939817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0127 02:50:35.780217  939817 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0127 02:50:35.783334  939817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:35.909553  939817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:50:35.922007  939817 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958 for IP: 192.168.39.156
	I0127 02:50:35.922040  939817 certs.go:194] generating shared ca certs ...
	I0127 02:50:35.922064  939817 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:35.922276  939817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:50:35.922326  939817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:50:35.922338  939817 certs.go:256] generating profile certs ...
	I0127 02:50:35.922432  939817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.key
	I0127 02:50:35.922458  939817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329
	I0127 02:50:35.922482  939817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.156]
	I0127 02:50:36.026000  939817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 ...
	I0127 02:50:36.026037  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329: {Name:mk6f7f9dc2bf1ddc776a189d909124c00fa38061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.026219  939817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329 ...
	I0127 02:50:36.026234  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329: {Name:mkcc6a93879d51b776fbb9cbb1d304cddf8acd1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.026306  939817 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt.71fad329 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt
	I0127 02:50:36.026477  939817 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key.71fad329 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key
	I0127 02:50:36.026623  939817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.key
	I0127 02:50:36.026737  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:50:36.026768  939817 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:50:36.026778  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:50:36.026799  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:50:36.026833  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:50:36.026854  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:50:36.026899  939817 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:50:36.027547  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:50:36.055013  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:50:36.077238  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:50:36.096526  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:50:36.117518  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 02:50:36.148005  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:50:36.170689  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:50:36.190458  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:50:36.211348  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:50:36.231485  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:50:36.250627  939817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:50:36.270518  939817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:50:36.285047  939817 ssh_runner.go:195] Run: openssl version
	I0127 02:50:36.289829  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:50:36.299555  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.303658  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.303724  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:50:36.308759  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:50:36.317636  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:50:36.326680  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.330673  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.330775  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:50:36.335566  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:50:36.343261  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:50:36.352901  939817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.357326  939817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.357388  939817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:50:36.362550  939817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
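The ls / "openssl x509 -hash" / "ln -fs" sequence above installs each CA into the standard OpenSSL subject-hash layout under /etc/ssl/certs, where the link name is the certificate's subject hash plus a ".0" suffix so TLS clients can locate the issuer. A minimal sketch of the same idea, reusing the minikubeCA path from the log (illustrative only, not part of the captured run):
	# Compute the subject hash and create the hashed symlink, as the log does for b5213941.0.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"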
	I0127 02:50:36.369804  939817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:50:36.373575  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:50:36.379041  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:50:36.383863  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:50:36.388730  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:50:36.394108  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:50:36.398724  939817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 02:50:36.407602  939817 kubeadm.go:392] StartCluster: {Name:running-upgrade-078958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-078958 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 02:50:36.407690  939817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:50:36.407762  939817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:50:36.435183  939817 cri.go:89] found id: "3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58"
	I0127 02:50:36.435208  939817 cri.go:89] found id: "2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8"
	I0127 02:50:36.435212  939817 cri.go:89] found id: "2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906"
	I0127 02:50:36.435215  939817 cri.go:89] found id: "63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91"
	I0127 02:50:36.435218  939817 cri.go:89] found id: ""
	I0127 02:50:36.435272  939817 ssh_runner.go:195] Run: sudo runc list -f json
	I0127 02:50:36.458191  939817 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906","pid":1052,"status":"running","bundle":"/run/containers/storage/overlay-containers/2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906/userdata","rootfs":"/var/lib/containers/storage/overlay/5f521f98f9413818d12ffa435880938388ed4aca451d588597ae85e364f7dfe9/merged","created":"2025-01-27T02:49:55.374056791Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"911c4b27","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"911c4b27\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.279850675Z","io.kubernetes.cri-o.Image":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.24.1","io.kubernetes.cri-o.ImageRef":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-078958_aa6b1ad2a3e18dae7ecd309d3ee896a2/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apis
erver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5f521f98f9413818d12ffa435880938388ed4aca451d588597ae85e364f7dfe9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa6b1ad2a3e18dae7ecd309d3ee896a2/etc-hosts\",\"readonly\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa6b1ad2a3e18dae7ecd309d3ee896a2/containers/kube-apiserver/fcff9216\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.156:8443","kubernetes.io/config.hash":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubernetes.io/config.seen":"2025-01-27T02:49:53.249113216Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.Collec
tMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8","pid":1070,"status":"running","bundle":"/run/containers/storage/overlay-containers/2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8/userdata","rootfs":"/var/lib/containers/storage/overlay/2936cc5689b3a7c67ee38391192fe7ec4f3141f682b21520111e04fc544f0f22/merged","created":"2025-01-27T02:49:55.484134405Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eff52b7d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eff52b7d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernete
s.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.39518116Z","io.kubernetes.cri-o.Image":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.24.1","io.kubernetes.cri-o.ImageRef":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"90002065a0378229711bc7c07d28de07\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-078958_90002065a03782
29711bc7c07d28de07/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2936cc5689b3a7c67ee38391192fe7ec4f3141f682b21520111e04fc544f0f22/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"
/var/lib/kubelet/pods/90002065a0378229711bc7c07d28de07/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/90002065a0378229711bc7c07d28de07/containers/kube-scheduler/2bfe1ac7\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.hash":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.seen":"2025-01-27T02:49:53.249115491Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"
3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58","pid":1101,"status":"running","bundle":"/run/containers/storage/overlay-containers/3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58/userdata","rootfs":"/var/lib/containers/storage/overlay/609abfee6174fb3fe9afadd2b9599cc3ad5ebc4e0631eddaed4067020b343dda/merged","created":"2025-01-27T02:49:55.87661325Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4d0dbe90","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4d0dbe90\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-
o.ContainerID":"3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.675511865Z","io.kubernetes.cri-o.Image":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri-o.ImageRef":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"df8e097e73221d9c081cff50339554fb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-078958_df8e097e73221d9c081cff50339554fb/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609abfee6174fb3fe9afadd2b9599cc3ad5ebc4e0631eddaed4067020b343dda/merged","io.kuber
netes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df8e097e73221d9c081cff50339554fb/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df8e097e73221d9c081cff50339554fb/containers/etcd/dc4d494d\",\"readonly\":false},{\"container_path\":\"/var/lib/minik
ube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df8e097e73221d9c081cff50339554fb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.156:2379","kubernetes.io/config.hash":"df8e097e73221d9c081cff50339554fb","kubernetes.io/config.seen":"2025-01-27T02:49:53.249067788Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","pid":979,"status":"running","bundle":"/run/containers
/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata","rootfs":"/var/lib/containers/storage/overlay/fdb5851d5627361ee0dc37f342541b619002dd58fb27402b319f697cb48bbf2c/merged","created":"2025-01-27T02:49:54.816195525Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.156:2379\",\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249067788Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"df8e097e73221d9c081cff50339554fb\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-poddf8e097e73221d9c081cff50339554fb.slice","io.kubernetes.cri-o.ContainerID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.ContainerType":"sand
box","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.619335447Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"df8e097e73221d9c081cff50339554fb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-078958\",\"component\":\"etcd\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-078958_df8e097e73221d9c081cff50339554fb/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"etcd-running-upgrade-078958\",\"UID\":\"df8e097e73221d
9c081cff50339554fb\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fdb5851d5627361ee0dc37f342541b619002dd58fb27402b319f697cb48bbf2c/merged","io.kubernetes.cri-o.Name":"k8s_etcd-running-upgrade-078958_kube-system_df8e097e73221d9c081cff50339554fb_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c0408
39/userdata/shm","io.kubernetes.pod.name":"etcd-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df8e097e73221d9c081cff50339554fb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.156:2379","kubernetes.io/config.hash":"df8e097e73221d9c081cff50339554fb","kubernetes.io/config.seen":"2025-01-27T02:49:53.249067788Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91","pid":1017,"status":"running","bundle":"/run/containers/storage/overlay-containers/63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91/userdata","rootfs":"/var/lib/containers/storage/overlay/6c5d9e74224779bd114d5c13f236188a7e8f5445f5a33a081a0978791a795136/merged","created":"2025-01-27T02:49:55.237331165Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c682979","io
.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c682979\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-01-27T02:49:55.13951357Z","io.kubernetes.cri-o.Image":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.24.1","io.kubernetes.cri-o.ImageRef":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b8050765
38d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-078958\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2b90f71684ea3d808f7d6100624bc03d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-078958_2b90f71684ea3d808f7d6100624bc03d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c5d9e74224779bd114d5c13f236188a7e8f5445f5a33a081a0978791a795136/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f
e029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2b90f71684ea3d808f7d6100624bc03d/containers/kube-controller-manager/bbf0657d\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2b90f71684ea3d808f7d6100624bc03d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_pa
th\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.hash":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.seen":"2025-01-27T02:49:53.249114461Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424
bd7ac182","pid":971,"status":"running","bundle":"/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata","rootfs":"/var/lib/containers/storage/overlay/74a6482c994a439196b12a22b25dd2f7c46b5c76ac8a31bbdb7e525931de5bcc/merged","created":"2025-01-27T02:49:54.76796547Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249113216Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.156:8443\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podaa6b1ad2a3e18dae7ecd309d3ee896a2.slice","io.kubernetes.cri-o.ContainerID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-running-upgrade-078958
_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.596164276Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-078958\",\"component\":\"kube-apiserver\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-078958_aa6b1ad2a3e18dae7ecd309d3ee896a2/9ae1f057d0a7f9f5730f352d0b7272fe308b01
e54c2de6ec75b53424bd7ac182.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-apiserver-running-upgrade-078958\",\"UID\":\"aa6b1ad2a3e18dae7ecd309d3ee896a2\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/74a6482c994a439196b12a22b25dd2f7c46b5c76ac8a31bbdb7e525931de5bcc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-running-upgrade-078958_kube-system_aa6b1ad2a3e18dae7ecd309d3ee896a2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182","io.kubernetes.cri-o.SeccompProfilePath":"runtime
/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.156:8443","kubernetes.io/config.hash":"aa6b1ad2a3e18dae7ecd309d3ee896a2","kubernetes.io/config.seen":"2025-01-27T02:49:53.249113216Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","pid":945,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata","rootfs":"/var/lib/containers/storage/overlay/a69df51a0a5afd3a1644e9dd576301ab5496a9916
53d68c1594be7d9d009b7ac/merged","created":"2025-01-27T02:49:54.68111673Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249115491Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"90002065a0378229711bc7c07d28de07\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod90002065a0378229711bc7c07d28de07.slice","io.kubernetes.cri-o.ContainerID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.591371001Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f5c818db
aa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"90002065a0378229711bc7c07d28de07\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-078958\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-078958_90002065a0378229711bc7c07d28de07/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-scheduler-running-upgrade-078958\",\"UID\":\"90002065a0378229711bc7c07d28de07\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a69df51a0a5afd3a1644e9dd576301ab5496a991653d68c1594be7d9d009b7ac/merg
ed","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-running-upgrade-078958_kube-system_90002065a0378229711bc7c07d28de07_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"90002065a0378229711bc7c07d28de07","kubernetes
.io/config.hash":"90002065a0378229711bc7c07d28de07","kubernetes.io/config.seen":"2025-01-27T02:49:53.249115491Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","pid":980,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata","rootfs":"/var/lib/containers/storage/overlay/0989ec2728a9921e2b9900c2efc976d857753d007a74b492674c47d22ff70fb7/merged","created":"2025-01-27T02:49:54.804216011Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"2b90f71684ea3d808f7d6100624bc03d\",\"kubernetes.io/config.seen\":\"2025-01-27T02:49:53.249114461Z\"}","io.kubernetes.cri-o.Cgroup
Parent":"kubepods-burstable-pod2b90f71684ea3d808f7d6100624bc03d.slice","io.kubernetes.cri-o.ContainerID":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-01-27T02:49:54.628754741Z","io.kubernetes.cri-o.HostName":"running-upgrade-078958","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-running-upgrade-078958","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-078958\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name
\":\"POD\",\"io.kubernetes.pod.uid\":\"2b90f71684ea3d808f7d6100624bc03d\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-078958_2b90f71684ea3d808f7d6100624bc03d/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-controller-manager-running-upgrade-078958\",\"UID\":\"2b90f71684ea3d808f7d6100624bc03d\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0989ec2728a9921e2b9900c2efc976d857753d007a74b492674c47d22ff70fb7/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-running-upgrade-078958_kube-system_2b90f71684ea3d808f7d6100624bc03d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/
run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-078958","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.hash":"2b90f71684ea3d808f7d6100624bc03d","kubernetes.io/config.seen":"2025-01-27T02:49:53.249114461Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0127 02:50:36.458674  939817 cri.go:126] list returned 8 containers
	I0127 02:50:36.458694  939817 cri.go:129] container: {ID:2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 Status:running}
	I0127 02:50:36.458728  939817 cri.go:135] skipping {2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 running}: state = "running", want "paused"
	I0127 02:50:36.458745  939817 cri.go:129] container: {ID:2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 Status:running}
	I0127 02:50:36.458752  939817 cri.go:135] skipping {2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 running}: state = "running", want "paused"
	I0127 02:50:36.458760  939817 cri.go:129] container: {ID:3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 Status:running}
	I0127 02:50:36.458769  939817 cri.go:135] skipping {3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 running}: state = "running", want "paused"
	I0127 02:50:36.458774  939817 cri.go:129] container: {ID:5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839 Status:running}
	I0127 02:50:36.458781  939817 cri.go:131] skipping 5fb9d22f297310f96fb8c4664cabe5be60ff6d7d1d4bb9a865363f723c040839 - not in ps
	I0127 02:50:36.458814  939817 cri.go:129] container: {ID:63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91 Status:running}
	I0127 02:50:36.458829  939817 cri.go:135] skipping {63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91 running}: state = "running", want "paused"
	I0127 02:50:36.458836  939817 cri.go:129] container: {ID:9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182 Status:running}
	I0127 02:50:36.458845  939817 cri.go:131] skipping 9ae1f057d0a7f9f5730f352d0b7272fe308b01e54c2de6ec75b53424bd7ac182 - not in ps
	I0127 02:50:36.458852  939817 cri.go:129] container: {ID:f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b Status:running}
	I0127 02:50:36.458858  939817 cri.go:131] skipping f5c818dbaa9b364c4f0dc32bf3fa13c7e8aef204091ad83c1b33b8452582de9b - not in ps
	I0127 02:50:36.458864  939817 cri.go:129] container: {ID:fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780 Status:running}
	I0127 02:50:36.458878  939817 cri.go:131] skipping fe029dd9ec6645e1c570dbce1f8d08df55760c9ca7ed380cc4c1579176201780 - not in ps
	I0127 02:50:36.458934  939817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0127 02:50:36.467489  939817 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0127 02:50:36.467518  939817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:50:36.467525  939817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:50:36.467582  939817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:50:36.474914  939817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.475554  939817 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-078958" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:36.475799  939817 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-078958" cluster setting kubeconfig missing "running-upgrade-078958" context setting]
	I0127 02:50:36.476216  939817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:36.477114  939817 kapi.go:59] client config for running-upgrade-078958: &rest.Config{Host:"https://192.168.39.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.crt", KeyFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/profiles/running-upgrade-078958/client.key", CAFile:"/home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 02:50:36.477803  939817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:50:36.485206  939817 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "running-upgrade-078958"
	   kubeletExtraArgs:
	     node-ip: 192.168.39.156
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
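The drift check above is simply a "diff -u" of the kubeadm.yaml already present on the node against the freshly rendered kubeadm.yaml.new; any non-empty diff (here the criSocket URI scheme and the cgroupDriver/kubelet options) is treated as configuration drift and triggers a reconfigure of the cluster. The same check can be reproduced by hand on the node (illustrative only):
	# Exit status 0 means the configs match; non-zero means drift, as detected in the log above.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no drift" || echo "drift detected: cluster will be reconfigured"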
	I0127 02:50:36.485228  939817 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:50:36.485244  939817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 02:50:36.485295  939817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:50:36.513719  939817 cri.go:89] found id: "3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58"
	I0127 02:50:36.513744  939817 cri.go:89] found id: "2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8"
	I0127 02:50:36.513748  939817 cri.go:89] found id: "2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906"
	I0127 02:50:36.513751  939817 cri.go:89] found id: "63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91"
	I0127 02:50:36.513754  939817 cri.go:89] found id: ""
	I0127 02:50:36.513759  939817 cri.go:252] Stopping containers: [3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91]
	I0127 02:50:36.513826  939817 ssh_runner.go:195] Run: which crictl
	I0127 02:50:36.517200  939817 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91
	I0127 02:50:36.777420  939626 pod_ready.go:93] pod "kube-apiserver-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:36.777444  939626 pod_ready.go:82] duration metric: took 1.005719886s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:36.777453  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.284567  939626 pod_ready.go:93] pod "kube-controller-manager-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.284605  939626 pod_ready.go:82] duration metric: took 1.507143563s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.284621  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.291489  939626 pod_ready.go:93] pod "kube-proxy-9pg5p" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.291522  939626 pod_ready.go:82] duration metric: took 6.891907ms for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.291536  939626 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.797432  939626 pod_ready.go:93] pod "kube-scheduler-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.797459  939626 pod_ready.go:82] duration metric: took 505.914331ms for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.797467  939626 pod_ready.go:39] duration metric: took 11.090421806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
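For readers following the pod_ready entries above: each wait polls the pod until its PodReady condition reports True, then records the elapsed duration. A minimal sketch of that kind of check with client-go follows (function name and polling interval are illustrative, not minikube's actual pod_ready.go code):

    // pod_ready_sketch.go - hypothetical illustration, not minikube source.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its PodReady condition is True or
    // the timeout expires, roughly the behaviour logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        start := time.Now()
        for time.Since(start) < timeout {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // retry interval chosen for the sketch
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }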
	I0127 02:50:38.797486  939626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 02:50:38.809482  939626 ops.go:34] apiserver oom_adj: -16
	I0127 02:50:38.809503  939626 kubeadm.go:597] duration metric: took 19.769699245s to restartPrimaryControlPlane
	I0127 02:50:38.809512  939626 kubeadm.go:394] duration metric: took 19.871243976s to StartCluster
	I0127 02:50:38.809532  939626 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:38.809608  939626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:50:38.810379  939626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:50:38.810611  939626 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 02:50:38.810692  939626 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:50:38.810908  939626 config.go:182] Loaded profile config "pause-622238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:50:38.812090  939626 out.go:177] * Verifying Kubernetes components...
	I0127 02:50:38.812797  939626 out.go:177] * Enabled addons: 
	I0127 02:50:36.572853  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:36.573505  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find current IP address of domain NoKubernetes-954952 in network mk-NoKubernetes-954952
	I0127 02:50:36.573556  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | I0127 02:50:36.573490  940039 retry.go:31] will retry after 4.167241164s: waiting for domain to come up
	I0127 02:50:40.746019  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has defined MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:40.746503  939995 main.go:141] libmachine: (NoKubernetes-954952) found domain IP: 192.168.61.132
	I0127 02:50:40.746525  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | domain NoKubernetes-954952 has current primary IP address 192.168.61.132 and MAC address 52:54:00:f8:f3:cb in network mk-NoKubernetes-954952
	I0127 02:50:40.746532  939995 main.go:141] libmachine: (NoKubernetes-954952) reserving static IP address...
	I0127 02:50:40.746880  939995 main.go:141] libmachine: (NoKubernetes-954952) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-954952", mac: "52:54:00:f8:f3:cb", ip: "192.168.61.132"} in network mk-NoKubernetes-954952
	I0127 02:50:38.813525  939626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:50:38.814082  939626 addons.go:514] duration metric: took 3.40522ms for enable addons: enabled=[]
	I0127 02:50:38.959905  939626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:50:38.976352  939626 node_ready.go:35] waiting up to 6m0s for node "pause-622238" to be "Ready" ...
	I0127 02:50:38.979219  939626 node_ready.go:49] node "pause-622238" has status "Ready":"True"
	I0127 02:50:38.979241  939626 node_ready.go:38] duration metric: took 2.837588ms for node "pause-622238" to be "Ready" ...
	I0127 02:50:38.979253  939626 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:50:38.984174  939626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.988546  939626 pod_ready.go:93] pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:38.988569  939626 pod_ready.go:82] duration metric: took 4.366573ms for pod "coredns-668d6bf9bc-f22h8" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:38.988580  939626 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.368411  939626 pod_ready.go:93] pod "etcd-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:39.368448  939626 pod_ready.go:82] duration metric: took 379.85883ms for pod "etcd-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.368463  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.768838  939626 pod_ready.go:93] pod "kube-apiserver-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:39.768887  939626 pod_ready.go:82] duration metric: took 400.414809ms for pod "kube-apiserver-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:39.768905  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.168617  939626 pod_ready.go:93] pod "kube-controller-manager-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.168647  939626 pod_ready.go:82] duration metric: took 399.732256ms for pod "kube-controller-manager-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.168660  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.569146  939626 pod_ready.go:93] pod "kube-proxy-9pg5p" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.569177  939626 pod_ready.go:82] duration metric: took 400.507282ms for pod "kube-proxy-9pg5p" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.569191  939626 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	W0127 02:50:36.708916  939817 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58 2f6d9a2c191d6c4b5df4ae47f4282d605578c6983bd404e86efcdd6976766bf8 2978847c1472fbd1f55edb9152b02b426763d55b42d6735fd1af38b990794906 63c3964a5eabc3335c39cee8e6e2f152f8d5aefa1430ad891496fa3685d4ba91: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:50:36Z" level=fatal msg="stopping the container \"3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58\": rpc error: code = Unknown desc = failed to unmount container 3fee94a2fa7e4422e88c007f82289b7c12824d207d6dd4c99341c9f7dbd6ab58: layer not known"
	I0127 02:50:36.709017  939817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:50:36.738557  939817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:50:36.747193  939817 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jan 27 02:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jan 27 02:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jan 27 02:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jan 27 02:49 /etc/kubernetes/scheduler.conf
	
	I0127 02:50:36.747273  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0127 02:50:36.754560  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.754632  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:50:36.763607  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0127 02:50:36.772262  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.772334  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:50:36.781343  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0127 02:50:36.788670  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.788724  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:50:36.795927  939817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0127 02:50:36.803058  939817 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:50:36.803111  939817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
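The grep/rm sequence above checks each control-plane kubeconfig for the expected API endpoint and deletes the file when grep exits non-zero, so the subsequent `kubeadm init phase kubeconfig all` can regenerate it. A condensed sketch of that loop (run locally here; the report's commands go through minikube's ssh_runner):

    // kubeconfig_cleanup_sketch.go - hypothetical illustration, not minikube source.
    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // removeStaleKubeconfigs deletes kubeconfig files that no longer mention the
    // expected control-plane endpoint; kubeadm recreates them afterwards.
    func removeStaleKubeconfigs(endpoint string) error {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits 1 when the endpoint is absent - treat the file as stale.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    return fmt.Errorf("removing %s: %w", f, err)
                }
            }
        }
        return nil
    }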
	I0127 02:50:36.811040  939817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:50:36.818693  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:36.906058  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.482822  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.847115  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:50:37.908289  939817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
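The five commands above replay the kubeadm init phases one at a time against the version-pinned binaries under /var/lib/minikube/binaries. A compact sketch of that sequence (local exec for brevity; the report's runs go through ssh_runner):

    // kubeadm_phases_sketch.go - hypothetical illustration, not minikube source.
    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases executes the kubeadm init phases in the order logged above.
    func runInitPhases(k8sVersion, configPath string) error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
                k8sVersion, phase, configPath)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm phase %q failed: %v\n%s", phase, err, out)
            }
        }
        return nil
    }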
	I0127 02:50:38.022247  939817 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:50:38.022341  939817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:50:38.044489  939817 api_server.go:72] duration metric: took 22.240139ms to wait for apiserver process to appear ...
	I0127 02:50:38.044519  939817 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:50:38.044541  939817 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0127 02:50:40.054649  939817 api_server.go:279] https://192.168.39.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:50:40.054695  939817 api_server.go:103] status: https://192.168.39.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:50:40.054717  939817 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
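The healthz loop above keeps polling https://192.168.39.156:8443/healthz, logging the 500 body (etcd not yet up) until the endpoint returns 200. A minimal polling sketch follows; TLS verification is skipped only to keep the example short, whereas the real check authenticates with the cluster's own certificates:

    // healthz_poll_sketch.go - hypothetical illustration, not minikube source.
    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitAPIServerHealthy polls the /healthz URL until it returns HTTP 200 or
    // the timeout expires, logging any non-200 body along the way.
    func waitAPIServerHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                log.Printf("status: %s returned error %d:\n%s", url, resp.StatusCode, body)
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s did not become healthy within %v", url, timeout)
    }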
	I0127 02:50:40.968553  939626 pod_ready.go:93] pod "kube-scheduler-pause-622238" in "kube-system" namespace has status "Ready":"True"
	I0127 02:50:40.968578  939626 pod_ready.go:82] duration metric: took 399.378375ms for pod "kube-scheduler-pause-622238" in "kube-system" namespace to be "Ready" ...
	I0127 02:50:40.968587  939626 pod_ready.go:39] duration metric: took 1.989321738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:50:40.968602  939626 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:50:40.968663  939626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:50:40.982963  939626 api_server.go:72] duration metric: took 2.172313148s to wait for apiserver process to appear ...
	I0127 02:50:40.982991  939626 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:50:40.983011  939626 api_server.go:253] Checking apiserver healthz at https://192.168.50.58:8443/healthz ...
	I0127 02:50:40.988340  939626 api_server.go:279] https://192.168.50.58:8443/healthz returned 200:
	ok
	I0127 02:50:40.989372  939626 api_server.go:141] control plane version: v1.32.1
	I0127 02:50:40.989397  939626 api_server.go:131] duration metric: took 6.396298ms to wait for apiserver health ...
	I0127 02:50:40.989406  939626 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:50:41.170415  939626 system_pods.go:59] 6 kube-system pods found
	I0127 02:50:41.170447  939626 system_pods.go:61] "coredns-668d6bf9bc-f22h8" [8dad4f16-9af8-4d93-9cf5-232c0a0935a0] Running
	I0127 02:50:41.170452  939626 system_pods.go:61] "etcd-pause-622238" [946278f4-86ff-43c3-b930-a9a0d1214046] Running
	I0127 02:50:41.170456  939626 system_pods.go:61] "kube-apiserver-pause-622238" [fc330ca3-25d8-4cb9-ae4e-35832b066331] Running
	I0127 02:50:41.170459  939626 system_pods.go:61] "kube-controller-manager-pause-622238" [f7e24926-6b27-4133-b1ea-967e10c0efab] Running
	I0127 02:50:41.170463  939626 system_pods.go:61] "kube-proxy-9pg5p" [b532db91-62b9-4bee-bbc9-1613f5989325] Running
	I0127 02:50:41.170466  939626 system_pods.go:61] "kube-scheduler-pause-622238" [a850b25b-fc22-4574-a131-70861f2c285a] Running
	I0127 02:50:41.170473  939626 system_pods.go:74] duration metric: took 181.060834ms to wait for pod list to return data ...
	I0127 02:50:41.170481  939626 default_sa.go:34] waiting for default service account to be created ...
	I0127 02:50:41.368731  939626 default_sa.go:45] found service account: "default"
	I0127 02:50:41.368768  939626 default_sa.go:55] duration metric: took 198.280596ms for default service account to be created ...
	I0127 02:50:41.368782  939626 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 02:50:41.570032  939626 system_pods.go:87] 6 kube-system pods found
	I0127 02:50:41.768670  939626 system_pods.go:105] "coredns-668d6bf9bc-f22h8" [8dad4f16-9af8-4d93-9cf5-232c0a0935a0] Running
	I0127 02:50:41.768695  939626 system_pods.go:105] "etcd-pause-622238" [946278f4-86ff-43c3-b930-a9a0d1214046] Running
	I0127 02:50:41.768702  939626 system_pods.go:105] "kube-apiserver-pause-622238" [fc330ca3-25d8-4cb9-ae4e-35832b066331] Running
	I0127 02:50:41.768709  939626 system_pods.go:105] "kube-controller-manager-pause-622238" [f7e24926-6b27-4133-b1ea-967e10c0efab] Running
	I0127 02:50:41.768715  939626 system_pods.go:105] "kube-proxy-9pg5p" [b532db91-62b9-4bee-bbc9-1613f5989325] Running
	I0127 02:50:41.768722  939626 system_pods.go:105] "kube-scheduler-pause-622238" [a850b25b-fc22-4574-a131-70861f2c285a] Running
	I0127 02:50:41.768731  939626 system_pods.go:147] duration metric: took 399.941634ms to wait for k8s-apps to be running ...
	I0127 02:50:41.768740  939626 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 02:50:41.768800  939626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:50:41.783092  939626 system_svc.go:56] duration metric: took 14.339392ms WaitForService to wait for kubelet
	I0127 02:50:41.783130  939626 kubeadm.go:582] duration metric: took 2.97248684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:50:41.783157  939626 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:50:41.968915  939626 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:50:41.968960  939626 node_conditions.go:123] node cpu capacity is 2
	I0127 02:50:41.968973  939626 node_conditions.go:105] duration metric: took 185.810353ms to run NodePressure ...
	I0127 02:50:41.968985  939626 start.go:241] waiting for startup goroutines ...
	I0127 02:50:41.968992  939626 start.go:246] waiting for cluster config update ...
	I0127 02:50:41.968999  939626 start.go:255] writing updated cluster config ...
	I0127 02:50:41.969296  939626 ssh_runner.go:195] Run: rm -f paused
	I0127 02:50:42.019528  939626 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 02:50:42.021172  939626 out.go:177] * Done! kubectl is now configured to use "pause-622238" cluster and "default" namespace by default
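The closing start.go:600 line compares the kubectl client version with the cluster version and reports the minor-version skew (0 here). A small sketch of that comparison (parsing and function name are illustrative; the real check also decides whether the skew is acceptable):

    // version_skew_sketch.go - hypothetical illustration, not minikube source.
    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. 1.32.1 vs 1.32.1 -> 0.
    func minorSkew(kubectlVersion, clusterVersion string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version string %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        a, err := minor(kubectlVersion)
        if err != nil {
            return 0, err
        }
        b, err := minor(clusterVersion)
        if err != nil {
            return 0, err
        }
        if a > b {
            return a - b, nil
        }
        return b - a, nil
    }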
	
	
	==> CRI-O <==
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.528192925Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-f22h8,Uid:8dad4f16-9af8-4d93-9cf5-232c0a0935a0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217599117606,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T02:49:10.556384936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-622238,Uid:d437827597bc003de56f7a6187cae3a5,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217334414116,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d437827597bc003de56f7a6187cae3a5,kubernetes.io/config.seen: 2025-01-27T02:49:05.559536036Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-622238,Uid:c55da833af33a845ed3f4bcc5624da47,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217311799886,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a
845ed3f4bcc5624da47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c55da833af33a845ed3f4bcc5624da47,kubernetes.io/config.seen: 2025-01-27T02:49:05.559536871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&PodSandboxMetadata{Name:etcd-pause-622238,Uid:06d52062ced041c1b7c3c9bf4475a5b1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217300091505,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.58:2379,kubernetes.io/config.hash: 06d52062ced041c1b7c3c9bf4475a5b1,kubernetes.io/config.seen: 2025-01-27T02:49:05.559530274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&PodSandboxMetadata{Name:kube-proxy-9pg5p,Uid:b532db91-62b9-4bee-bbc9-1613f5989325,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217297237669,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T02:49:10.335040724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-622238,Uid:e736acc6ecf8b13aad639a8cb1f8bd63,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737946217213121422,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.58:8443,kubernetes.io/config.hash: e736acc6ecf8b13aad639a8cb1f8bd63,kubernetes.io/config.seen: 2025-01-27T02:49:05.559534805Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&PodSandboxMetadata{Name:kube-proxy-9pg5p,Uid:b532db91-62b9-4bee-bbc9-1613f5989325,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737946215104862400,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]st
ring{kubernetes.io/config.seen: 2025-01-27T02:49:10.335040724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-622238,Uid:e736acc6ecf8b13aad639a8cb1f8bd63,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737946215061575363,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.58:8443,kubernetes.io/config.hash: e736acc6ecf8b13aad639a8cb1f8bd63,kubernetes.io/config.seen: 2025-01-27T02:49:05.559534805Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&PodSandbox
Metadata{Name:kube-controller-manager-pause-622238,Uid:d437827597bc003de56f7a6187cae3a5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737946215030086396,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d437827597bc003de56f7a6187cae3a5,kubernetes.io/config.seen: 2025-01-27T02:49:05.559536036Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-622238,Uid:c55da833af33a845ed3f4bcc5624da47,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737946214996407984,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name
: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c55da833af33a845ed3f4bcc5624da47,kubernetes.io/config.seen: 2025-01-27T02:49:05.559536871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&PodSandboxMetadata{Name:etcd-pause-622238,Uid:06d52062ced041c1b7c3c9bf4475a5b1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737946214946019087,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.58:2379,kubernetes.io/config.hash: 06d52062ced041c1b7c3c9bf4475a5b1,kubernetes.io/
config.seen: 2025-01-27T02:49:05.559530274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-f22h8,Uid:8dad4f16-9af8-4d93-9cf5-232c0a0935a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1737946150866025655,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T02:49:10.556384936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ab214951-59a3-413c-bbc8-9e9625d754fe name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.528897251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b6caa0a-6293-4fd7-977f-45ffb934b728 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.528948472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b6caa0a-6293-4fd7-977f-45ffb934b728 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.529176611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b6caa0a-6293-4fd7-977f-45ffb934b728 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.562267547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cd644ef-6302-4a13-991f-3aaa93e2a308 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.562458482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cd644ef-6302-4a13-991f-3aaa93e2a308 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.564217357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4de4f009-cb08-49bb-82a1-2e7ac4cd4648 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.564793171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946244564767107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4de4f009-cb08-49bb-82a1-2e7ac4cd4648 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.565531866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b5d02e5-7b1d-49c7-ba9a-323bede15c9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.565585579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b5d02e5-7b1d-49c7-ba9a-323bede15c9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.565849643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b5d02e5-7b1d-49c7-ba9a-323bede15c9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.607352033Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d5ec57d-98a5-444c-8f5f-09a36fabc34a name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.607424613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d5ec57d-98a5-444c-8f5f-09a36fabc34a name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.608545190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f29419e2-c340-49c1-aec5-588efed50e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.609003581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946244608966750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f29419e2-c340-49c1-aec5-588efed50e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.609549881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f361c27f-84d1-4565-85ee-44c3f2718c72 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.609619108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f361c27f-84d1-4565-85ee-44c3f2718c72 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.610512611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f361c27f-84d1-4565-85ee-44c3f2718c72 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.660123478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5527eae4-c4e5-4160-b7fd-afc532e0ea66 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.660197584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5527eae4-c4e5-4160-b7fd-afc532e0ea66 name=/runtime.v1.RuntimeService/Version
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.661605273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9097381-5d65-48d4-9809-e5e4d763d827 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.662390490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946244662364711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9097381-5d65-48d4-9809-e5e4d763d827 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.662992278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30dd8288-9c1e-455c-8a1b-db73fbb4a778 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.663043708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30dd8288-9c1e-455c-8a1b-db73fbb4a778 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 02:50:44 pause-622238 crio[2912]: time="2025-01-27 02:50:44.663275098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733,PodSandboxId:1ffeec06b0153cfa9999e512843a074b2d6534e439f7a45479cec9f5414895ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946226431726734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d,PodSandboxId:d2fb98eb658ae19ffbd127d5cb1e6319aca30b95a485a3a4e3a3919af9061dc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946226406205568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256,PodSandboxId:755a82eacc889b0e328e7a75614c5e4c3f1d127d4796daaa85f72b8bdb41b520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946221671634862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464,PodSandboxId:b6b7226c8ff780b6cdafaf2d15c402103b5f97f4a48acec815817351f761c3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946221681890230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7
36acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53,PodSandboxId:27d12b4cc7ec869ed20a79b3f74adea03efa326fdf53913d7c405d65e82a11cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946221558217377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f
4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721,PodSandboxId:a35453f75adbb90b0a9fbc30cfe45a92c23569aeaef3de45b9519afe4a6324e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946221563863068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819,PodSandboxId:490f4f6c968c5cd52220544ac5fac788cea22eff14c7f0b3839456731e778e28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737946215566892997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pg5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b532db91-62b9-4bee-bbc9-1613f5989325,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56,PodSandboxId:bf5215f0aedd701090cfe09782f59164141f7067082adcb076c382b265f92c2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737946215632820908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d437827597bc003de56f7a6187cae3a5,},Annotations:map[string]string{io.kubernetes.container.hash:
16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229,PodSandboxId:b2b582d5d4f3a5f7d33c84868be88ac6d9fa7c92542626a39a4b583d13f3f2d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737946215628464124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55da833af33a845ed3f4bcc5624da47,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3,PodSandboxId:1f1e9f60ffcf2e9b104554631fc4da65c2fc13ca9d6ee88ea4ccb9cbfa699758,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946215496141284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e736acc6ecf8b13aad639a8cb1f8bd63,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686,PodSandboxId:8ec628c2a5c8bed1a6f7ffd9286f75e493f5ccb71814e9a4ee6da4f782d8300f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737946215400445438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-622238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06d52062ced041c1b7c3c9bf4475a5b1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb,PodSandboxId:ab2780aabca04b216184097a282c72ef64c686ec523d744186d4ac3c5b69e493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737946151284840755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f22h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dad4f16-9af8-4d93-9cf5-232c0a0935a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30dd8288-9c1e-455c-8a1b-db73fbb4a778 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a8134cd7d02fa       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   18 seconds ago       Running             kube-proxy                2                   1ffeec06b0153       kube-proxy-9pg5p
	21c957636d37d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 seconds ago       Running             coredns                   1                   d2fb98eb658ae       coredns-668d6bf9bc-f22h8
	437dd26112fe8       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   23 seconds ago       Running             kube-apiserver            2                   b6b7226c8ff78       kube-apiserver-pause-622238
	c3ce858dc5535       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   23 seconds ago       Running             kube-controller-manager   2                   755a82eacc889       kube-controller-manager-pause-622238
	48c5b60752758       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago       Running             etcd                      2                   a35453f75adbb       etcd-pause-622238
	c45f3cea74699       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   23 seconds ago       Running             kube-scheduler            2                   27d12b4cc7ec8       kube-scheduler-pause-622238
	723b4e88e6210       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   29 seconds ago       Exited              kube-controller-manager   1                   bf5215f0aedd7       kube-controller-manager-pause-622238
	bcab89d41125a       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   29 seconds ago       Exited              kube-scheduler            1                   b2b582d5d4f3a       kube-scheduler-pause-622238
	825a3f8f6a885       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   29 seconds ago       Exited              kube-proxy                1                   490f4f6c968c5       kube-proxy-9pg5p
	d62e04f6f5e13       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   29 seconds ago       Exited              kube-apiserver            1                   1f1e9f60ffcf2       kube-apiserver-pause-622238
	90e530f8899a0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   29 seconds ago       Exited              etcd                      1                   8ec628c2a5c8b       etcd-pause-622238
	fe790ab890a49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   ab2780aabca04       coredns-668d6bf9bc-f22h8
	
	
	==> coredns [21c957636d37d8c0760bf8d712101ad72bf0a6bda19423de8053e78d2285f11d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52311 - 830 "HINFO IN 3172464895456124788.1314168427043943882. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015358136s
	
	
	==> coredns [fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[313740481]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.503) (total time: 28922ms):
	Trace[313740481]: ---"Objects listed" error:<nil> 28922ms (02:49:40.426)
	Trace[313740481]: [28.922262662s] [28.922262662s] END
	[INFO] plugin/kubernetes: Trace[1652299318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.504) (total time: 28921ms):
	Trace[1652299318]: ---"Objects listed" error:<nil> 28921ms (02:49:40.426)
	Trace[1652299318]: [28.921831874s] [28.921831874s] END
	[INFO] plugin/kubernetes: Trace[1508934810]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 02:49:11.503) (total time: 28923ms):
	Trace[1508934810]: ---"Objects listed" error:<nil> 28923ms (02:49:40.426)
	Trace[1508934810]: [28.92363372s] [28.92363372s] END
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55012 - 15646 "HINFO IN 741992024223013000.3069113336796665424. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.152961392s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-622238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-622238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=pause-622238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T02_49_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 02:49:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-622238
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 02:50:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 02:50:25 +0000   Mon, 27 Jan 2025 02:49:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.58
	  Hostname:    pause-622238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3bf4199713e4d72a71493668e8f4425
	  System UUID:                e3bf4199-713e-4d72-a714-93668e8f4425
	  Boot ID:                    f75fc88e-62c3-4db5-aba9-d74454c0bb37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-f22h8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-pause-622238                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-622238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-622238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-9pg5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-622238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 93s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     99s                kubelet          Node pause-622238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-622238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node pause-622238 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeReady                98s                kubelet          Node pause-622238 status is now: NodeReady
	  Normal  RegisteredNode           95s                node-controller  Node pause-622238 event: Registered Node pause-622238 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-622238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-622238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-622238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-622238 event: Registered Node pause-622238 in Controller
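	
	In the node description above, the Allocated resources figure of cpu 750m (37%) is simply the sum of the per-pod CPU requests listed under Non-terminated Pods (100m + 100m + 250m + 200m + 0 + 100m = 750m) over the node's 2-CPU capacity (750m / 2000m ≈ 37%). The same conditions and allocatable values can be read programmatically; a minimal client-go sketch, assuming a reachable kubeconfig for this profile (the kubeconfig path is a placeholder, the node name is taken from this log):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        // Hypothetical kubeconfig path; minikube profiles normally write contexts here.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	
	        node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-622238", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // Same data as the Conditions table: MemoryPressure/DiskPressure/PIDPressure/Ready.
	        for _, c := range node.Status.Conditions {
	            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	        }
	        // Same data as the Capacity/Allocatable blocks.
	        fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	        fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
	    }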
	
	
	==> dmesg <==
	[  +9.295555] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.060036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057599] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.192391] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.128417] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.286705] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.162484] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +4.389723] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.061939] kauditd_printk_skb: 158 callbacks suppressed
	[Jan27 02:49] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.078344] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.851540] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.494494] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.333731] kauditd_printk_skb: 66 callbacks suppressed
	[Jan27 02:50] systemd-fstab-generator[2329]: Ignoring "noauto" option for root device
	[  +0.166853] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.205251] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.307503] systemd-fstab-generator[2463]: Ignoring "noauto" option for root device
	[  +1.156818] systemd-fstab-generator[2877]: Ignoring "noauto" option for root device
	[  +1.869371] systemd-fstab-generator[3492]: Ignoring "noauto" option for root device
	[  +2.599346] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.076620] kauditd_printk_skb: 238 callbacks suppressed
	[  +5.605690] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.736292] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.592729] systemd-fstab-generator[4082]: Ignoring "noauto" option for root device
	
	
	==> etcd [48c5b6075275813cb01bde4bf7366b0d6ffa9336e295c09121b9f3cd66671721] <==
	{"level":"warn","ts":"2025-01-27T02:50:28.809183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.718646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-01-27T02:50:28.809434Z","caller":"traceutil/trace.go:171","msg":"trace[1467415693] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:419; }","duration":"167.054472ms","start":"2025-01-27T02:50:28.642365Z","end":"2025-01-27T02:50:28.809420Z","steps":["trace[1467415693] 'agreement among raft nodes before linearized reading'  (duration: 166.705349ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:28.810123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.140916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-01-27T02:50:28.812043Z","caller":"traceutil/trace.go:171","msg":"trace[1746586585] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:420; }","duration":"121.090381ms","start":"2025-01-27T02:50:28.690937Z","end":"2025-01-27T02:50:28.812027Z","steps":["trace[1746586585] 'agreement among raft nodes before linearized reading'  (duration: 119.087077ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:28.812676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.918091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2025-01-27T02:50:28.812956Z","caller":"traceutil/trace.go:171","msg":"trace[27740484] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:420; }","duration":"108.21522ms","start":"2025-01-27T02:50:28.704720Z","end":"2025-01-27T02:50:28.812935Z","steps":["trace[27740484] 'agreement among raft nodes before linearized reading'  (duration: 107.903018ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T02:50:28.977478Z","caller":"traceutil/trace.go:171","msg":"trace[512016606] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"133.065152ms","start":"2025-01-27T02:50:28.842201Z","end":"2025-01-27T02:50:28.975266Z","steps":["trace[512016606] 'process raft request'  (duration: 130.011944ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:29.330992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.637771ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14849094360650042599 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" mod_revision:441 > success:<request_put:<key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" value_size:594 lease:5625722323795266741 >> failure:<request_range:<key:\"/registry/events/default/pause-622238.181e6ce150a2410a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T02:50:29.331330Z","caller":"traceutil/trace.go:171","msg":"trace[1979705480] linearizableReadLoop","detail":"{readStateIndex:489; appliedIndex:488; }","duration":"126.025395ms","start":"2025-01-27T02:50:29.205208Z","end":"2025-01-27T02:50:29.331234Z","steps":["trace[1979705480] 'read index received'  (duration: 465.1µs)","trace[1979705480] 'applied index is now lower than readState.Index'  (duration: 125.559059ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:29.331569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.351274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:5147"}
	{"level":"info","ts":"2025-01-27T02:50:29.331639Z","caller":"traceutil/trace.go:171","msg":"trace[1592011581] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:444; }","duration":"126.460482ms","start":"2025-01-27T02:50:29.205170Z","end":"2025-01-27T02:50:29.331631Z","steps":["trace[1592011581] 'agreement among raft nodes before linearized reading'  (duration: 126.271874ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T02:50:29.331823Z","caller":"traceutil/trace.go:171","msg":"trace[869005392] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"140.369071ms","start":"2025-01-27T02:50:29.191445Z","end":"2025-01-27T02:50:29.331814Z","steps":["trace[869005392] 'process raft request'  (duration: 14.330634ms)","trace[869005392] 'compare'  (duration: 124.471718ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T02:50:32.621147Z","caller":"traceutil/trace.go:171","msg":"trace[145569241] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"279.979989ms","start":"2025-01-27T02:50:32.341134Z","end":"2025-01-27T02:50:32.621114Z","steps":["trace[145569241] 'process raft request'  (duration: 279.734042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.238354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"479.820404ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14849094360650042646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:424 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T02:50:33.238526Z","caller":"traceutil/trace.go:171","msg":"trace[63584628] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"602.702334ms","start":"2025-01-27T02:50:32.635800Z","end":"2025-01-27T02:50:33.238502Z","steps":["trace[63584628] 'process raft request'  (duration: 122.592598ms)","trace[63584628] 'compare'  (duration: 479.686374ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:33.238615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.635772Z","time spent":"602.805823ms","remote":"127.0.0.1:34082","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:424 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.239407Z","caller":"traceutil/trace.go:171","msg":"trace[1971086333] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:514; }","duration":"534.882454ms","start":"2025-01-27T02:50:32.704514Z","end":"2025-01-27T02:50:33.239396Z","steps":["trace[1971086333] 'read index received'  (duration: 53.889529ms)","trace[1971086333] 'applied index is now lower than readState.Index'  (duration: 480.992291ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T02:50:33.239579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"535.055214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 ","response":"range_response_count:1 size:4969"}
	{"level":"info","ts":"2025-01-27T02:50:33.239902Z","caller":"traceutil/trace.go:171","msg":"trace[1088969542] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-f22h8; range_end:; response_count:1; response_revision:472; }","duration":"535.399465ms","start":"2025-01-27T02:50:32.704488Z","end":"2025-01-27T02:50:33.239888Z","steps":["trace[1088969542] 'agreement among raft nodes before linearized reading'  (duration: 535.034544ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.239965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.704474Z","time spent":"535.478745ms","remote":"127.0.0.1:34094","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":4992,"request content":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-f22h8\" limit:1 "}
	{"level":"info","ts":"2025-01-27T02:50:33.240438Z","caller":"traceutil/trace.go:171","msg":"trace[1606065146] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"603.305307ms","start":"2025-01-27T02:50:32.637123Z","end":"2025-01-27T02:50:33.240429Z","steps":["trace[1606065146] 'process raft request'  (duration: 602.230759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.240695Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.637105Z","time spent":"603.53587ms","remote":"127.0.0.1:34408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:422 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.240471Z","caller":"traceutil/trace.go:171","msg":"trace[545986295] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"604.376468ms","start":"2025-01-27T02:50:32.636089Z","end":"2025-01-27T02:50:33.240465Z","steps":["trace[545986295] 'process raft request'  (duration: 603.132964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T02:50:33.240921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T02:50:32.636071Z","time spent":"604.811626ms","remote":"127.0.0.1:34176","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" mod_revision:427 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-6lmkt\" > >"}
	{"level":"info","ts":"2025-01-27T02:50:33.487249Z","caller":"traceutil/trace.go:171","msg":"trace[1516247128] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"227.109598ms","start":"2025-01-27T02:50:33.260110Z","end":"2025-01-27T02:50:33.487220Z","steps":["trace[1516247128] 'process raft request'  (duration: 218.679433ms)"],"step_count":1}
	
	
	==> etcd [90e530f8899a063641c0232d3a8d81ba60eab63da53061aa8f6e54d35a989686] <==
	{"level":"info","ts":"2025-01-27T02:50:15.999913Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T02:50:16.023509Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","commit-index":425}
	{"level":"info","ts":"2025-01-27T02:50:16.023783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T02:50:16.023855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T02:50:16.023876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b223154dc276ce12 [peers: [], term: 2, commit: 425, applied: 0, lastindex: 425, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T02:50:16.039459Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T02:50:16.053527Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":402}
	{"level":"info","ts":"2025-01-27T02:50:16.072786Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T02:50:16.075081Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b223154dc276ce12","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075387Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b223154dc276ce12"}
	{"level":"info","ts":"2025-01-27T02:50:16.075429Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"b223154dc276ce12","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T02:50:16.075711Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T02:50:16.075897Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075921Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.075927Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T02:50:16.076217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 switched to configuration voters=(12836126786655276562)"}
	{"level":"info","ts":"2025-01-27T02:50:16.076264Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","added-peer-id":"b223154dc276ce12","added-peer-peer-urls":["https://192.168.50.58:2380"]}
	{"level":"info","ts":"2025-01-27T02:50:16.076417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94a432db9bee1c6","local-member-id":"b223154dc276ce12","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:50:16.076449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:50:16.080629Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T02:50:16.087556Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T02:50:16.087835Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2025-01-27T02:50:16.087846Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2025-01-27T02:50:16.089017Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T02:50:16.088961Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b223154dc276ce12","initial-advertise-peer-urls":["https://192.168.50.58:2380"],"listen-peer-urls":["https://192.168.50.58:2380"],"advertise-client-urls":["https://192.168.50.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 02:50:45 up 2 min,  0 users,  load average: 0.68, 0.30, 0.11
	Linux pause-622238 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [437dd26112fe8d6fefdc85356c026c96984ad2fff136a2a4327d658f9442e464] <==
	I0127 02:50:25.415084       1 policy_source.go:240] refreshing policies
	I0127 02:50:25.416071       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 02:50:25.416261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 02:50:25.416629       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 02:50:25.416699       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 02:50:25.418526       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 02:50:25.428557       1 aggregator.go:171] initial CRD sync complete...
	I0127 02:50:25.428683       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 02:50:25.428716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 02:50:25.428739       1 cache.go:39] Caches are synced for autoregister controller
	I0127 02:50:25.432022       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 02:50:25.432104       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 02:50:25.436631       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 02:50:25.446268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 02:50:25.459672       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0127 02:50:25.461621       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 02:50:26.168627       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 02:50:26.238156       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 02:50:27.412746       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 02:50:27.592262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 02:50:27.654242       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 02:50:27.667685       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 02:50:28.968088       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 02:50:28.995852       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 02:50:29.008688       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d62e04f6f5e13451192cf7b3275a99d64a2c9696cf576a5ee12858cfa3dc94c3] <==
	W0127 02:50:16.241973       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 02:50:16.242734       1 options.go:238] external host was not specified, using 192.168.50.58
	I0127 02:50:16.252137       1 server.go:143] Version: v1.32.1
	I0127 02:50:16.252478       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [723b4e88e62109878bd5c6cf65e85c07583b1be9a253ed0436768f103dcf7b56] <==
	
	
	==> kube-controller-manager [c3ce858dc5535eb7b195ff30036e1f1f5b6fdf5698b50df27ef513cc82e34256] <==
	I0127 02:50:28.604666       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 02:50:28.602436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 02:50:28.602461       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 02:50:28.614873       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 02:50:28.602481       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 02:50:28.602493       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 02:50:28.602512       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 02:50:28.608152       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 02:50:28.620074       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 02:50:28.622360       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 02:50:28.623837       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 02:50:28.630247       1 shared_informer.go:320] Caches are synced for namespace
	I0127 02:50:28.635732       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 02:50:28.638271       1 shared_informer.go:320] Caches are synced for job
	I0127 02:50:28.643229       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 02:50:28.643254       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 02:50:28.643271       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 02:50:28.643692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 02:50:28.645790       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 02:50:28.645814       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 02:50:28.657986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 02:50:28.985716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="391.102136ms"
	I0127 02:50:28.989284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.737µs"
	I0127 02:50:33.248242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="618.069225ms"
	I0127 02:50:33.249453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.678µs"
	
	
	==> kube-proxy [825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819] <==
	
	
	==> kube-proxy [a8134cd7d02fa6732b6a82499a19cd8e722326cd3d251615c779006744788733] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 02:50:26.815625       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 02:50:26.829931       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.58"]
	E0127 02:50:26.830093       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 02:50:26.873423       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 02:50:26.873490       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 02:50:26.873526       1 server_linux.go:170] "Using iptables Proxier"
	I0127 02:50:26.878543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 02:50:26.878959       1 server.go:497] "Version info" version="v1.32.1"
	I0127 02:50:26.878985       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:50:26.883624       1 config.go:329] "Starting node config controller"
	I0127 02:50:26.883733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 02:50:26.884072       1 config.go:199] "Starting service config controller"
	I0127 02:50:26.884108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 02:50:26.884132       1 config.go:105] "Starting endpoint slice config controller"
	I0127 02:50:26.884138       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 02:50:26.984361       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 02:50:26.984367       1 shared_informer.go:320] Caches are synced for node config
	I0127 02:50:26.984391       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bcab89d41125a8414b58de1d77550e1159a49e4d71ff4369d142f7e3501b9229] <==
	
	
	==> kube-scheduler [c45f3cea74699716fd13b9d40fc91d0b27d5e1cc5cfcca8cc9e1c5dac734fd53] <==
	I0127 02:50:22.847597       1 serving.go:386] Generated self-signed cert in-memory
	I0127 02:50:25.492282       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 02:50:25.492382       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 02:50:25.512369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 02:50:25.512757       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0127 02:50:25.512829       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0127 02:50:25.512901       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 02:50:25.512952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:50:25.512991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0127 02:50:25.513020       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 02:50:25.514973       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 02:50:25.613970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 02:50:25.613984       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0127 02:50:25.614005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461184    3623 kubelet_node_status.go:125] "Node was previously registered" node="pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461351    3623 kubelet_node_status.go:79] "Successfully registered node" node="pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.461385    3623 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.462592    3623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.490017    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.559732    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-622238\" already exists" pod="kube-system/kube-controller-manager-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.559778    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.575009    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-622238\" already exists" pod="kube-system/kube-scheduler-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.575051    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.594277    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-622238\" already exists" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.594375    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: I0127 02:50:25.600438    3623 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.609444    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-622238\" already exists" pod="kube-system/kube-apiserver-pause-622238"
	Jan 27 02:50:25 pause-622238 kubelet[3623]: E0127 02:50:25.612784    3623 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-622238\" already exists" pod="kube-system/etcd-pause-622238"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.074927    3623 apiserver.go:52] "Watching apiserver"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.101092    3623 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.158634    3623 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b532db91-62b9-4bee-bbc9-1613f5989325-xtables-lock\") pod \"kube-proxy-9pg5p\" (UID: \"b532db91-62b9-4bee-bbc9-1613f5989325\") " pod="kube-system/kube-proxy-9pg5p"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.159227    3623 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b532db91-62b9-4bee-bbc9-1613f5989325-lib-modules\") pod \"kube-proxy-9pg5p\" (UID: \"b532db91-62b9-4bee-bbc9-1613f5989325\") " pod="kube-system/kube-proxy-9pg5p"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.380990    3623 scope.go:117] "RemoveContainer" containerID="fe790ab890a4916fdd3f58edb0503dd303a6e1cf28eb9c753b63fc4ecbb7ddeb"
	Jan 27 02:50:26 pause-622238 kubelet[3623]: I0127 02:50:26.381630    3623 scope.go:117] "RemoveContainer" containerID="825a3f8f6a885295c9c9e9439f12deaad49e21c747b3809d5cff027c4b9e1819"
	Jan 27 02:50:31 pause-622238 kubelet[3623]: E0127 02:50:31.204199    3623 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946231203422462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:31 pause-622238 kubelet[3623]: E0127 02:50:31.204238    3623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946231203422462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:32 pause-622238 kubelet[3623]: I0127 02:50:32.322349    3623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jan 27 02:50:41 pause-622238 kubelet[3623]: E0127 02:50:41.207710    3623 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946241206784868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 02:50:41 pause-622238 kubelet[3623]: E0127 02:50:41.207752    3623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737946241206784868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-622238 -n pause-622238
helpers_test.go:261: (dbg) Run:  kubectl --context pause-622238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (60.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (286.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m46.424341931s)

                                                
                                                
-- stdout --
	* [old-k8s-version-542356] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-542356" primary control-plane node in "old-k8s-version-542356" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:53:20.591117  945208 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:53:20.591233  945208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:53:20.591244  945208 out.go:358] Setting ErrFile to fd 2...
	I0127 02:53:20.591250  945208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:53:20.591415  945208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:53:20.592123  945208 out.go:352] Setting JSON to false
	I0127 02:53:20.593281  945208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12944,"bootTime":1737933457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:53:20.593390  945208 start.go:139] virtualization: kvm guest
	I0127 02:53:20.595501  945208 out.go:177] * [old-k8s-version-542356] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:53:20.596736  945208 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:53:20.596737  945208 notify.go:220] Checking for updates...
	I0127 02:53:20.597929  945208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:53:20.598991  945208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:53:20.600042  945208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:53:20.601146  945208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:53:20.602226  945208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:53:20.603636  945208 config.go:182] Loaded profile config "cert-expiration-591242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:53:20.603733  945208 config.go:182] Loaded profile config "cert-options-919407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:53:20.603816  945208 config.go:182] Loaded profile config "kubernetes-upgrade-080871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 02:53:20.603914  945208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:53:20.640096  945208 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 02:53:20.641299  945208 start.go:297] selected driver: kvm2
	I0127 02:53:20.641314  945208 start.go:901] validating driver "kvm2" against <nil>
	I0127 02:53:20.641326  945208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:53:20.642043  945208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:53:20.642117  945208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:53:20.658799  945208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:53:20.658867  945208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:53:20.659105  945208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:53:20.659137  945208 cni.go:84] Creating CNI manager for ""
	I0127 02:53:20.659180  945208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:53:20.659191  945208 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 02:53:20.659249  945208 start.go:340] cluster config:
	{Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:53:20.659347  945208 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:53:20.660941  945208 out.go:177] * Starting "old-k8s-version-542356" primary control-plane node in "old-k8s-version-542356" cluster
	I0127 02:53:20.661866  945208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 02:53:20.661897  945208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 02:53:20.661911  945208 cache.go:56] Caching tarball of preloaded images
	I0127 02:53:20.661990  945208 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:53:20.662000  945208 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 02:53:20.662101  945208 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/config.json ...
	I0127 02:53:20.662119  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/config.json: {Name:mk3f5775f38647571fb96d56da7302e4c862ca91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:53:20.662238  945208 start.go:360] acquireMachinesLock for old-k8s-version-542356: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:53:29.881489  945208 start.go:364] duration metric: took 9.219191664s to acquireMachinesLock for "old-k8s-version-542356"
	I0127 02:53:29.881575  945208 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 02:53:29.881718  945208 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 02:53:29.884361  945208 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 02:53:29.884618  945208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:53:29.884677  945208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:53:29.905727  945208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I0127 02:53:29.906225  945208 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:53:29.906826  945208 main.go:141] libmachine: Using API Version  1
	I0127 02:53:29.906854  945208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:53:29.907304  945208 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:53:29.907595  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 02:53:29.907834  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:29.908013  945208 start.go:159] libmachine.API.Create for "old-k8s-version-542356" (driver="kvm2")
	I0127 02:53:29.908042  945208 client.go:168] LocalClient.Create starting
	I0127 02:53:29.908077  945208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 02:53:29.908119  945208 main.go:141] libmachine: Decoding PEM data...
	I0127 02:53:29.908140  945208 main.go:141] libmachine: Parsing certificate...
	I0127 02:53:29.908228  945208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 02:53:29.908260  945208 main.go:141] libmachine: Decoding PEM data...
	I0127 02:53:29.908279  945208 main.go:141] libmachine: Parsing certificate...
	I0127 02:53:29.908311  945208 main.go:141] libmachine: Running pre-create checks...
	I0127 02:53:29.908326  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .PreCreateCheck
	I0127 02:53:29.908690  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetConfigRaw
	I0127 02:53:29.909232  945208 main.go:141] libmachine: Creating machine...
	I0127 02:53:29.909256  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .Create
	I0127 02:53:29.909397  945208 main.go:141] libmachine: (old-k8s-version-542356) creating KVM machine...
	I0127 02:53:29.909427  945208 main.go:141] libmachine: (old-k8s-version-542356) creating network...
	I0127 02:53:29.910732  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found existing default KVM network
	I0127 02:53:29.912386  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:29.912264  945293 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c040}
	I0127 02:53:29.912448  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | created network xml: 
	I0127 02:53:29.912468  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | <network>
	I0127 02:53:29.912482  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   <name>mk-old-k8s-version-542356</name>
	I0127 02:53:29.912501  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   <dns enable='no'/>
	I0127 02:53:29.912509  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   
	I0127 02:53:29.912516  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 02:53:29.912524  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |     <dhcp>
	I0127 02:53:29.912530  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 02:53:29.912538  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |     </dhcp>
	I0127 02:53:29.912542  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   </ip>
	I0127 02:53:29.912557  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG |   
	I0127 02:53:29.912564  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | </network>
	I0127 02:53:29.912585  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | 
	I0127 02:53:29.918063  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | trying to create private KVM network mk-old-k8s-version-542356 192.168.39.0/24...
	I0127 02:53:29.991420  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | private KVM network mk-old-k8s-version-542356 192.168.39.0/24 created
	I0127 02:53:29.991452  945208 main.go:141] libmachine: (old-k8s-version-542356) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356 ...
	I0127 02:53:29.991481  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:29.991394  945293 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:53:29.991501  945208 main.go:141] libmachine: (old-k8s-version-542356) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 02:53:29.991563  945208 main.go:141] libmachine: (old-k8s-version-542356) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 02:53:30.283304  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:30.283160  945293 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa...
	I0127 02:53:30.340435  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:30.340298  945293 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/old-k8s-version-542356.rawdisk...
	I0127 02:53:30.340471  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | Writing magic tar header
	I0127 02:53:30.340486  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | Writing SSH key tar header
	I0127 02:53:30.340516  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:30.340414  945293 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356 ...
	I0127 02:53:30.340537  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356
	I0127 02:53:30.340555  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356 (perms=drwx------)
	I0127 02:53:30.340570  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 02:53:30.340585  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:53:30.340597  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 02:53:30.340612  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 02:53:30.340622  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home/jenkins
	I0127 02:53:30.340632  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | checking permissions on dir: /home
	I0127 02:53:30.340653  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 02:53:30.340665  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | skipping /home - not owner
	I0127 02:53:30.340708  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 02:53:30.340744  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 02:53:30.340756  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 02:53:30.340770  945208 main.go:141] libmachine: (old-k8s-version-542356) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 02:53:30.340784  945208 main.go:141] libmachine: (old-k8s-version-542356) creating domain...
	I0127 02:53:30.341832  945208 main.go:141] libmachine: (old-k8s-version-542356) define libvirt domain using xml: 
	I0127 02:53:30.341863  945208 main.go:141] libmachine: (old-k8s-version-542356) <domain type='kvm'>
	I0127 02:53:30.341889  945208 main.go:141] libmachine: (old-k8s-version-542356)   <name>old-k8s-version-542356</name>
	I0127 02:53:30.341901  945208 main.go:141] libmachine: (old-k8s-version-542356)   <memory unit='MiB'>2200</memory>
	I0127 02:53:30.341908  945208 main.go:141] libmachine: (old-k8s-version-542356)   <vcpu>2</vcpu>
	I0127 02:53:30.341918  945208 main.go:141] libmachine: (old-k8s-version-542356)   <features>
	I0127 02:53:30.341927  945208 main.go:141] libmachine: (old-k8s-version-542356)     <acpi/>
	I0127 02:53:30.341937  945208 main.go:141] libmachine: (old-k8s-version-542356)     <apic/>
	I0127 02:53:30.341956  945208 main.go:141] libmachine: (old-k8s-version-542356)     <pae/>
	I0127 02:53:30.341967  945208 main.go:141] libmachine: (old-k8s-version-542356)     
	I0127 02:53:30.341997  945208 main.go:141] libmachine: (old-k8s-version-542356)   </features>
	I0127 02:53:30.342022  945208 main.go:141] libmachine: (old-k8s-version-542356)   <cpu mode='host-passthrough'>
	I0127 02:53:30.342050  945208 main.go:141] libmachine: (old-k8s-version-542356)   
	I0127 02:53:30.342073  945208 main.go:141] libmachine: (old-k8s-version-542356)   </cpu>
	I0127 02:53:30.342086  945208 main.go:141] libmachine: (old-k8s-version-542356)   <os>
	I0127 02:53:30.342097  945208 main.go:141] libmachine: (old-k8s-version-542356)     <type>hvm</type>
	I0127 02:53:30.342109  945208 main.go:141] libmachine: (old-k8s-version-542356)     <boot dev='cdrom'/>
	I0127 02:53:30.342119  945208 main.go:141] libmachine: (old-k8s-version-542356)     <boot dev='hd'/>
	I0127 02:53:30.342129  945208 main.go:141] libmachine: (old-k8s-version-542356)     <bootmenu enable='no'/>
	I0127 02:53:30.342139  945208 main.go:141] libmachine: (old-k8s-version-542356)   </os>
	I0127 02:53:30.342149  945208 main.go:141] libmachine: (old-k8s-version-542356)   <devices>
	I0127 02:53:30.342165  945208 main.go:141] libmachine: (old-k8s-version-542356)     <disk type='file' device='cdrom'>
	I0127 02:53:30.342184  945208 main.go:141] libmachine: (old-k8s-version-542356)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/boot2docker.iso'/>
	I0127 02:53:30.342208  945208 main.go:141] libmachine: (old-k8s-version-542356)       <target dev='hdc' bus='scsi'/>
	I0127 02:53:30.342221  945208 main.go:141] libmachine: (old-k8s-version-542356)       <readonly/>
	I0127 02:53:30.342228  945208 main.go:141] libmachine: (old-k8s-version-542356)     </disk>
	I0127 02:53:30.342248  945208 main.go:141] libmachine: (old-k8s-version-542356)     <disk type='file' device='disk'>
	I0127 02:53:30.342268  945208 main.go:141] libmachine: (old-k8s-version-542356)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 02:53:30.342287  945208 main.go:141] libmachine: (old-k8s-version-542356)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/old-k8s-version-542356.rawdisk'/>
	I0127 02:53:30.342295  945208 main.go:141] libmachine: (old-k8s-version-542356)       <target dev='hda' bus='virtio'/>
	I0127 02:53:30.342319  945208 main.go:141] libmachine: (old-k8s-version-542356)     </disk>
	I0127 02:53:30.342330  945208 main.go:141] libmachine: (old-k8s-version-542356)     <interface type='network'>
	I0127 02:53:30.342352  945208 main.go:141] libmachine: (old-k8s-version-542356)       <source network='mk-old-k8s-version-542356'/>
	I0127 02:53:30.342372  945208 main.go:141] libmachine: (old-k8s-version-542356)       <model type='virtio'/>
	I0127 02:53:30.342390  945208 main.go:141] libmachine: (old-k8s-version-542356)     </interface>
	I0127 02:53:30.342407  945208 main.go:141] libmachine: (old-k8s-version-542356)     <interface type='network'>
	I0127 02:53:30.342421  945208 main.go:141] libmachine: (old-k8s-version-542356)       <source network='default'/>
	I0127 02:53:30.342431  945208 main.go:141] libmachine: (old-k8s-version-542356)       <model type='virtio'/>
	I0127 02:53:30.342441  945208 main.go:141] libmachine: (old-k8s-version-542356)     </interface>
	I0127 02:53:30.342452  945208 main.go:141] libmachine: (old-k8s-version-542356)     <serial type='pty'>
	I0127 02:53:30.342463  945208 main.go:141] libmachine: (old-k8s-version-542356)       <target port='0'/>
	I0127 02:53:30.342470  945208 main.go:141] libmachine: (old-k8s-version-542356)     </serial>
	I0127 02:53:30.342482  945208 main.go:141] libmachine: (old-k8s-version-542356)     <console type='pty'>
	I0127 02:53:30.342503  945208 main.go:141] libmachine: (old-k8s-version-542356)       <target type='serial' port='0'/>
	I0127 02:53:30.342514  945208 main.go:141] libmachine: (old-k8s-version-542356)     </console>
	I0127 02:53:30.342526  945208 main.go:141] libmachine: (old-k8s-version-542356)     <rng model='virtio'>
	I0127 02:53:30.342540  945208 main.go:141] libmachine: (old-k8s-version-542356)       <backend model='random'>/dev/random</backend>
	I0127 02:53:30.342549  945208 main.go:141] libmachine: (old-k8s-version-542356)     </rng>
	I0127 02:53:30.342560  945208 main.go:141] libmachine: (old-k8s-version-542356)     
	I0127 02:53:30.342598  945208 main.go:141] libmachine: (old-k8s-version-542356)     
	I0127 02:53:30.342619  945208 main.go:141] libmachine: (old-k8s-version-542356)   </devices>
	I0127 02:53:30.342639  945208 main.go:141] libmachine: (old-k8s-version-542356) </domain>
	I0127 02:53:30.342658  945208 main.go:141] libmachine: (old-k8s-version-542356) 
	I0127 02:53:30.346354  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:47:08:f3 in network default
	I0127 02:53:30.347034  945208 main.go:141] libmachine: (old-k8s-version-542356) starting domain...
	I0127 02:53:30.347057  945208 main.go:141] libmachine: (old-k8s-version-542356) ensuring networks are active...
	I0127 02:53:30.347070  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:30.347827  945208 main.go:141] libmachine: (old-k8s-version-542356) Ensuring network default is active
	I0127 02:53:30.348238  945208 main.go:141] libmachine: (old-k8s-version-542356) Ensuring network mk-old-k8s-version-542356 is active
	I0127 02:53:30.348846  945208 main.go:141] libmachine: (old-k8s-version-542356) getting domain XML...
	I0127 02:53:30.349600  945208 main.go:141] libmachine: (old-k8s-version-542356) creating domain...
	I0127 02:53:31.727273  945208 main.go:141] libmachine: (old-k8s-version-542356) waiting for IP...
	I0127 02:53:31.728390  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:31.729050  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:31.729157  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:31.729045  945293 retry.go:31] will retry after 275.135754ms: waiting for domain to come up
	I0127 02:53:32.005706  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:32.006437  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:32.006469  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:32.006414  945293 retry.go:31] will retry after 355.59506ms: waiting for domain to come up
	I0127 02:53:32.363997  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:32.364627  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:32.364660  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:32.364544  945293 retry.go:31] will retry after 393.566113ms: waiting for domain to come up
	I0127 02:53:32.760448  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:32.761216  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:32.761241  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:32.761121  945293 retry.go:31] will retry after 584.426041ms: waiting for domain to come up
	I0127 02:53:33.346879  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:33.347715  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:33.347738  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:33.347623  945293 retry.go:31] will retry after 520.895585ms: waiting for domain to come up
	I0127 02:53:33.870315  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:33.870835  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:33.870904  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:33.870803  945293 retry.go:31] will retry after 576.924877ms: waiting for domain to come up
	I0127 02:53:34.450049  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:34.450629  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:34.450659  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:34.450603  945293 retry.go:31] will retry after 1.183336679s: waiting for domain to come up
	I0127 02:53:35.635244  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:35.635731  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:35.635757  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:35.635697  945293 retry.go:31] will retry after 1.476025852s: waiting for domain to come up
	I0127 02:53:37.114033  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:37.114631  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:37.114664  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:37.114593  945293 retry.go:31] will retry after 1.40651394s: waiting for domain to come up
	I0127 02:53:38.523211  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:38.523788  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:38.523810  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:38.523753  945293 retry.go:31] will retry after 1.787814568s: waiting for domain to come up
	I0127 02:53:40.312989  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:40.313445  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:40.313519  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:40.313428  945293 retry.go:31] will retry after 2.760070418s: waiting for domain to come up
	I0127 02:53:43.075239  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:43.075847  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:43.075876  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:43.075806  945293 retry.go:31] will retry after 3.233218224s: waiting for domain to come up
	I0127 02:53:46.310775  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:46.311227  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:46.311254  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:46.311195  945293 retry.go:31] will retry after 3.961475695s: waiting for domain to come up
	I0127 02:53:50.274680  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:50.275146  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:53:50.275187  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:53:50.275118  945293 retry.go:31] will retry after 5.672063581s: waiting for domain to come up
	I0127 02:53:55.948559  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:55.949178  945208 main.go:141] libmachine: (old-k8s-version-542356) found domain IP: 192.168.39.85
	I0127 02:53:55.949211  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has current primary IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:55.949221  945208 main.go:141] libmachine: (old-k8s-version-542356) reserving static IP address...
	I0127 02:53:55.949572  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-542356", mac: "52:54:00:12:05:b8", ip: "192.168.39.85"} in network mk-old-k8s-version-542356
	I0127 02:53:56.024835  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | Getting to WaitForSSH function...
	I0127 02:53:56.024869  945208 main.go:141] libmachine: (old-k8s-version-542356) reserved static IP address 192.168.39.85 for domain old-k8s-version-542356
	I0127 02:53:56.024883  945208 main.go:141] libmachine: (old-k8s-version-542356) waiting for SSH...
	I0127 02:53:56.027983  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.028441  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.028470  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.028648  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | Using SSH client type: external
	I0127 02:53:56.028677  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa (-rw-------)
	I0127 02:53:56.028728  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:53:56.028741  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | About to run SSH command:
	I0127 02:53:56.028758  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | exit 0
	I0127 02:53:56.152888  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | SSH cmd err, output: <nil>: 
	I0127 02:53:56.153196  945208 main.go:141] libmachine: (old-k8s-version-542356) KVM machine creation complete
	I0127 02:53:56.153557  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetConfigRaw
	I0127 02:53:56.154154  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:56.154363  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:56.154530  945208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 02:53:56.154545  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetState
	I0127 02:53:56.155760  945208 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 02:53:56.155773  945208 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 02:53:56.155778  945208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 02:53:56.155783  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.158256  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.158675  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.158703  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.158866  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.159046  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.159248  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.159412  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.159601  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:56.159842  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:56.159855  945208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 02:53:56.260011  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:53:56.260038  945208 main.go:141] libmachine: Detecting the provisioner...
	I0127 02:53:56.260048  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.262881  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.263229  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.263271  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.263460  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.263666  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.263841  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.263989  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.264149  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:56.264377  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:56.264391  945208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 02:53:56.365368  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 02:53:56.365447  945208 main.go:141] libmachine: found compatible host: buildroot
	I0127 02:53:56.365457  945208 main.go:141] libmachine: Provisioning with buildroot...
	I0127 02:53:56.365464  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 02:53:56.365728  945208 buildroot.go:166] provisioning hostname "old-k8s-version-542356"
	I0127 02:53:56.365793  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 02:53:56.365966  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.369816  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.370214  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.370237  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.370432  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.370646  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.370790  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.370908  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.371035  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:56.371221  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:56.371240  945208 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-542356 && echo "old-k8s-version-542356" | sudo tee /etc/hostname
	I0127 02:53:56.490225  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-542356
	
	I0127 02:53:56.490266  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.493340  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.493699  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.493740  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.493932  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.494141  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.494310  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.494491  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.494632  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:56.494806  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:56.494822  945208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-542356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-542356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-542356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:53:56.605701  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:53:56.605750  945208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 02:53:56.605789  945208 buildroot.go:174] setting up certificates
	I0127 02:53:56.605800  945208 provision.go:84] configureAuth start
	I0127 02:53:56.605814  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 02:53:56.606138  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 02:53:56.608980  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.609351  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.609375  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.609504  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.611629  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.612060  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.612134  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.612211  945208 provision.go:143] copyHostCerts
	I0127 02:53:56.612273  945208 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 02:53:56.612293  945208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 02:53:56.612350  945208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 02:53:56.612488  945208 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 02:53:56.612506  945208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 02:53:56.612531  945208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 02:53:56.612596  945208 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 02:53:56.612603  945208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 02:53:56.612621  945208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 02:53:56.612691  945208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-542356 san=[127.0.0.1 192.168.39.85 localhost minikube old-k8s-version-542356]
	I0127 02:53:56.675497  945208 provision.go:177] copyRemoteCerts
	I0127 02:53:56.675569  945208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:53:56.675596  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.678431  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.678778  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.678811  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.678972  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.679163  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.679303  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.679455  945208 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 02:53:56.758628  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:53:56.782919  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 02:53:56.806768  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:53:56.830644  945208 provision.go:87] duration metric: took 224.826497ms to configureAuth
	I0127 02:53:56.830680  945208 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:53:56.830901  945208 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 02:53:56.830996  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:56.833909  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.834221  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:56.834248  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:56.834435  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:56.834660  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.834812  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:56.834945  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:56.835099  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:56.835292  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:56.835310  945208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 02:53:57.055115  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 02:53:57.055149  945208 main.go:141] libmachine: Checking connection to Docker...
	I0127 02:53:57.055158  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetURL
	I0127 02:53:57.056510  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | using libvirt version 6000000
	I0127 02:53:57.059078  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.059428  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.059461  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.059600  945208 main.go:141] libmachine: Docker is up and running!
	I0127 02:53:57.059615  945208 main.go:141] libmachine: Reticulating splines...
	I0127 02:53:57.059623  945208 client.go:171] duration metric: took 27.151572592s to LocalClient.Create
	I0127 02:53:57.059647  945208 start.go:167] duration metric: took 27.151636437s to libmachine.API.Create "old-k8s-version-542356"
	I0127 02:53:57.059657  945208 start.go:293] postStartSetup for "old-k8s-version-542356" (driver="kvm2")
	I0127 02:53:57.059668  945208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:53:57.059685  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:57.059951  945208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:53:57.059981  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:57.062263  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.062628  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.062674  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.062815  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:57.062988  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:57.063159  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:57.063302  945208 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 02:53:57.143136  945208 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:53:57.147078  945208 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:53:57.147110  945208 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 02:53:57.147181  945208 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 02:53:57.147291  945208 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 02:53:57.147394  945208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:53:57.156298  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:53:57.178014  945208 start.go:296] duration metric: took 118.341349ms for postStartSetup
	I0127 02:53:57.178084  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetConfigRaw
	I0127 02:53:57.178737  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 02:53:57.181300  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.181648  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.181675  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.181950  945208 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/config.json ...
	I0127 02:53:57.182187  945208 start.go:128] duration metric: took 27.30045196s to createHost
	I0127 02:53:57.182214  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:57.184690  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.185039  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.185098  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.185171  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:57.185362  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:57.185511  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:57.185633  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:57.185796  945208 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:57.185958  945208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 02:53:57.185971  945208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:53:57.285802  945208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946437.263856841
	
	I0127 02:53:57.285836  945208 fix.go:216] guest clock: 1737946437.263856841
	I0127 02:53:57.285847  945208 fix.go:229] Guest: 2025-01-27 02:53:57.263856841 +0000 UTC Remote: 2025-01-27 02:53:57.182200571 +0000 UTC m=+36.631035117 (delta=81.65627ms)
	I0127 02:53:57.285876  945208 fix.go:200] guest clock delta is within tolerance: 81.65627ms
	I0127 02:53:57.285882  945208 start.go:83] releasing machines lock for "old-k8s-version-542356", held for 27.404350423s
	I0127 02:53:57.285910  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:57.286250  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 02:53:57.289294  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.289703  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.289735  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.289905  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:57.290429  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:57.290617  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:53:57.290728  945208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:53:57.290770  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:57.290824  945208 ssh_runner.go:195] Run: cat /version.json
	I0127 02:53:57.290840  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 02:53:57.293664  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.293694  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.294089  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.294123  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.294160  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:57.294182  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:57.294284  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:57.294460  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 02:53:57.294498  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:57.294609  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 02:53:57.294692  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:57.294750  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 02:53:57.294840  945208 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 02:53:57.294875  945208 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 02:53:57.406388  945208 ssh_runner.go:195] Run: systemctl --version
	I0127 02:53:57.413210  945208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 02:53:57.572016  945208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:53:57.578530  945208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:53:57.578613  945208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:53:57.597012  945208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:53:57.597039  945208 start.go:495] detecting cgroup driver to use...
	I0127 02:53:57.597119  945208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 02:53:57.615700  945208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 02:53:57.629822  945208 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:53:57.629891  945208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:53:57.643486  945208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:53:57.658945  945208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:53:57.797921  945208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:53:57.936490  945208 docker.go:233] disabling docker service ...
	I0127 02:53:57.936588  945208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:53:57.950623  945208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:53:57.967715  945208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:53:58.100741  945208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:53:58.216171  945208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:53:58.229532  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:53:58.248307  945208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 02:53:58.248368  945208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:53:58.259097  945208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 02:53:58.259188  945208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:53:58.270402  945208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:53:58.285283  945208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:53:58.299131  945208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:53:58.313667  945208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:53:58.324951  945208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:53:58.325019  945208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:53:58.343328  945208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:53:58.353630  945208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:53:58.488694  945208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 02:53:58.582033  945208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 02:53:58.582117  945208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 02:53:58.587774  945208 start.go:563] Will wait 60s for crictl version
	I0127 02:53:58.587883  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:53:58.591804  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:53:58.633889  945208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 02:53:58.633981  945208 ssh_runner.go:195] Run: crio --version
	I0127 02:53:58.661257  945208 ssh_runner.go:195] Run: crio --version
	I0127 02:53:58.702046  945208 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 02:53:58.703422  945208 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 02:53:58.706858  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:58.707291  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 03:53:45 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 02:53:58.707331  945208 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:53:58.707599  945208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 02:53:58.711650  945208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:53:58.723701  945208 kubeadm.go:883] updating cluster {Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:53:58.723823  945208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 02:53:58.723871  945208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:53:58.754877  945208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 02:53:58.754954  945208 ssh_runner.go:195] Run: which lz4
	I0127 02:53:58.759753  945208 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 02:53:58.764022  945208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 02:53:58.764073  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 02:54:00.294475  945208 crio.go:462] duration metric: took 1.534757409s to copy over tarball
	I0127 02:54:00.294575  945208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 02:54:02.897105  945208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602496544s)
	I0127 02:54:02.897145  945208 crio.go:469] duration metric: took 2.602633432s to extract the tarball
	I0127 02:54:02.897156  945208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 02:54:02.940069  945208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:54:02.985760  945208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 02:54:02.985796  945208 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:54:02.985879  945208 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:54:02.985908  945208 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:02.985943  945208 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:02.985965  945208 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:02.985943  945208 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:02.985974  945208 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:02.986118  945208 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 02:54:02.986133  945208 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 02:54:02.987487  945208 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:02.987500  945208 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:02.987515  945208 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:02.987487  945208 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:02.987495  945208 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:54:02.987559  945208 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:02.987562  945208 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 02:54:02.987570  945208 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 02:54:03.177970  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 02:54:03.187465  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:03.207305  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:03.216140  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:03.218342  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:03.239522  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:03.245536  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 02:54:03.257648  945208 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 02:54:03.257738  945208 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 02:54:03.257838  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.308597  945208 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 02:54:03.308651  945208 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:03.308713  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.321468  945208 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 02:54:03.321526  945208 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:03.321585  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.367877  945208 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 02:54:03.367932  945208 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 02:54:03.367956  945208 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:03.368005  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.367935  945208 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:03.368066  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.367898  945208 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 02:54:03.368153  945208 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:03.368185  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.376813  945208 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 02:54:03.376852  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:03.376857  945208 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 02:54:03.376867  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:03.376888  945208 ssh_runner.go:195] Run: which crictl
	I0127 02:54:03.376824  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:54:03.376954  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:03.376982  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:03.376955  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:03.513464  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:03.513496  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:03.513496  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:03.513561  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:03.513593  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:54:03.513633  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:03.513648  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:54:03.665764  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 02:54:03.665824  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 02:54:03.665779  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 02:54:03.665881  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 02:54:03.665903  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:54:03.665982  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 02:54:03.666023  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 02:54:03.814889  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 02:54:03.814937  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 02:54:03.814995  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 02:54:03.815015  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 02:54:03.815107  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 02:54:03.815153  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 02:54:03.815223  945208 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 02:54:03.847321  945208 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 02:54:04.275936  945208 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:54:04.424490  945208 cache_images.go:92] duration metric: took 1.438672925s to LoadCachedImages
	W0127 02:54:04.424592  945208 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0127 02:54:04.424610  945208 kubeadm.go:934] updating node { 192.168.39.85 8443 v1.20.0 crio true true} ...
	I0127 02:54:04.424742  945208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-542356 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:54:04.424841  945208 ssh_runner.go:195] Run: crio config
	I0127 02:54:04.487145  945208 cni.go:84] Creating CNI manager for ""
	I0127 02:54:04.487171  945208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:54:04.487186  945208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:54:04.487211  945208 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-542356 NodeName:old-k8s-version-542356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 02:54:04.487396  945208 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-542356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:54:04.487474  945208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 02:54:04.497394  945208 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:54:04.497482  945208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:54:04.506926  945208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 02:54:04.524895  945208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:54:04.541155  945208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 02:54:04.559364  945208 ssh_runner.go:195] Run: grep 192.168.39.85	control-plane.minikube.internal$ /etc/hosts
	I0127 02:54:04.564162  945208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:54:04.577404  945208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:54:04.703062  945208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:54:04.719381  945208 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356 for IP: 192.168.39.85
	I0127 02:54:04.719412  945208 certs.go:194] generating shared ca certs ...
	I0127 02:54:04.719437  945208 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:04.719628  945208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:54:04.719687  945208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:54:04.719702  945208 certs.go:256] generating profile certs ...
	I0127 02:54:04.719775  945208 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.key
	I0127 02:54:04.719796  945208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.crt with IP's: []
	I0127 02:54:04.940153  945208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.crt ...
	I0127 02:54:04.940185  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.crt: {Name:mk91f9f6c4890b70f204f4fd2166fa2bdcc4743d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:04.940365  945208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.key ...
	I0127 02:54:04.940377  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.key: {Name:mk29192696aaf59b433b10bf5848ec37f46b68ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:04.940452  945208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key.4fcae880
	I0127 02:54:04.940468  945208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt.4fcae880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.85]
	I0127 02:54:05.063300  945208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt.4fcae880 ...
	I0127 02:54:05.063337  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt.4fcae880: {Name:mkbd7ef4f80585fef7d77d14dd13bba21a6be5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:05.063522  945208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key.4fcae880 ...
	I0127 02:54:05.063542  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key.4fcae880: {Name:mkc1f0ae18825e2ba0a6ce052f6270c5e4883ba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:05.063642  945208 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt.4fcae880 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt
	I0127 02:54:05.063758  945208 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key.4fcae880 -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key
	I0127 02:54:05.063873  945208 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key
	I0127 02:54:05.063910  945208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.crt with IP's: []
	I0127 02:54:05.220246  945208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.crt ...
	I0127 02:54:05.220281  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.crt: {Name:mk01cf1a85c9d0b24dada3be6ccd3bb04e7a4468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:05.220446  945208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key ...
	I0127 02:54:05.220462  945208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key: {Name:mk3ea28d69b7039ca85b274bd23b6a232a84e6cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:54:05.220630  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:54:05.220666  945208 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:54:05.220675  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:54:05.220699  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:54:05.220721  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:54:05.220743  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:54:05.220781  945208 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:54:05.221496  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:54:05.252510  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:54:05.276893  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:54:05.299849  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:54:05.323366  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 02:54:05.347180  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:54:05.370140  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:54:05.393566  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:54:05.416877  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:54:05.445852  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:54:05.469467  945208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:54:05.498512  945208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:54:05.517876  945208 ssh_runner.go:195] Run: openssl version
	I0127 02:54:05.523562  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:54:05.537444  945208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:54:05.550649  945208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:54:05.550727  945208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:54:05.558711  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 02:54:05.573535  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:54:05.594678  945208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:54:05.600052  945208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:54:05.600123  945208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:54:05.609181  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:54:05.626249  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:54:05.638064  945208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:54:05.642442  945208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:54:05.642512  945208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:54:05.648247  945208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:54:05.659554  945208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:54:05.663556  945208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 02:54:05.663625  945208 kubeadm.go:392] StartCluster: {Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:54:05.663750  945208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:54:05.663823  945208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:54:05.702831  945208 cri.go:89] found id: ""
	I0127 02:54:05.702917  945208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:54:05.712917  945208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:54:05.722975  945208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:54:05.733012  945208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:54:05.733040  945208 kubeadm.go:157] found existing configuration files:
	
	I0127 02:54:05.733107  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:54:05.743541  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:54:05.743622  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:54:05.753712  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:54:05.762733  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:54:05.762818  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:54:05.772096  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:54:05.782258  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:54:05.783218  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:54:05.795710  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:54:05.805731  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:54:05.805813  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:54:05.817249  945208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 02:54:05.940146  945208 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 02:54:05.940254  945208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:54:06.082799  945208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:54:06.082956  945208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:54:06.083097  945208 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 02:54:06.274159  945208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:54:06.353021  945208 out.go:235]   - Generating certificates and keys ...
	I0127 02:54:06.353168  945208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:54:06.353293  945208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:54:06.445756  945208 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 02:54:06.701078  945208 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 02:54:06.935111  945208 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 02:54:06.984901  945208 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 02:54:07.174667  945208 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 02:54:07.174957  945208 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0127 02:54:07.302492  945208 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 02:54:07.302660  945208 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0127 02:54:07.580152  945208 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 02:54:07.774005  945208 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 02:54:08.141704  945208 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 02:54:08.142054  945208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:54:08.202151  945208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:54:08.397538  945208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:54:08.619630  945208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:54:08.799409  945208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:54:08.817204  945208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:54:08.817351  945208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:54:08.817409  945208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:54:08.961403  945208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:54:08.963668  945208 out.go:235]   - Booting up control plane ...
	I0127 02:54:08.963859  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:54:08.981904  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:54:08.985580  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:54:08.985744  945208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:54:08.992291  945208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 02:54:48.987845  945208 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 02:54:48.987963  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:54:48.988184  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:54:53.987978  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:54:53.988185  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:03.987625  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:03.987912  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:55:23.987912  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:55:23.988137  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:56:03.989391  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:56:03.989686  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:56:03.989708  945208 kubeadm.go:310] 
	I0127 02:56:03.989742  945208 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 02:56:03.989775  945208 kubeadm.go:310] 		timed out waiting for the condition
	I0127 02:56:03.989783  945208 kubeadm.go:310] 
	I0127 02:56:03.989812  945208 kubeadm.go:310] 	This error is likely caused by:
	I0127 02:56:03.989841  945208 kubeadm.go:310] 		- The kubelet is not running
	I0127 02:56:03.990004  945208 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 02:56:03.990022  945208 kubeadm.go:310] 
	I0127 02:56:03.990152  945208 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 02:56:03.990217  945208 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 02:56:03.990265  945208 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 02:56:03.990275  945208 kubeadm.go:310] 
	I0127 02:56:03.990390  945208 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 02:56:03.990497  945208 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 02:56:03.990507  945208 kubeadm.go:310] 
	I0127 02:56:03.990669  945208 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 02:56:03.990794  945208 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 02:56:03.990901  945208 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 02:56:03.991001  945208 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 02:56:03.991018  945208 kubeadm.go:310] 
	I0127 02:56:03.991966  945208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:56:03.992095  945208 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 02:56:03.992185  945208 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 02:56:03.992348  945208 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-542356] and IPs [192.168.39.85 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 02:56:03.992403  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 02:56:09.494462  945208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.502017377s)
	I0127 02:56:09.494568  945208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:56:09.510202  945208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:56:09.523224  945208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:56:09.523255  945208 kubeadm.go:157] found existing configuration files:
	
	I0127 02:56:09.523325  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:56:09.533864  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:56:09.533955  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:56:09.544465  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:56:09.554530  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:56:09.554611  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:56:09.564642  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:56:09.574100  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:56:09.574172  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:56:09.583657  945208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:56:09.592716  945208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:56:09.592787  945208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:56:09.602414  945208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 02:56:09.677305  945208 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 02:56:09.677412  945208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:56:09.847516  945208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:56:09.847663  945208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:56:09.847781  945208 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 02:56:10.115467  945208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:56:10.118150  945208 out.go:235]   - Generating certificates and keys ...
	I0127 02:56:10.118294  945208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:56:10.118381  945208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:56:10.118523  945208 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 02:56:10.118647  945208 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 02:56:10.118777  945208 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 02:56:10.118876  945208 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 02:56:10.118987  945208 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 02:56:10.119105  945208 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 02:56:10.119242  945208 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 02:56:10.119361  945208 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 02:56:10.119412  945208 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 02:56:10.119491  945208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:56:10.268230  945208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:56:10.410766  945208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:56:10.628572  945208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:56:10.805134  945208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:56:10.824936  945208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:56:10.826617  945208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:56:10.826774  945208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:56:11.035170  945208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:56:11.037074  945208 out.go:235]   - Booting up control plane ...
	I0127 02:56:11.037217  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:56:11.048652  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:56:11.054731  945208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:56:11.056793  945208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:56:11.062183  945208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 02:56:51.064669  945208 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 02:56:51.065350  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:56:51.065586  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:56:56.065887  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:56:56.066195  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:57:06.065762  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:57:06.066052  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:57:26.065278  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:57:26.065523  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:58:06.065093  945208 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 02:58:06.065418  945208 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 02:58:06.065448  945208 kubeadm.go:310] 
	I0127 02:58:06.065506  945208 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 02:58:06.065565  945208 kubeadm.go:310] 		timed out waiting for the condition
	I0127 02:58:06.065580  945208 kubeadm.go:310] 
	I0127 02:58:06.065630  945208 kubeadm.go:310] 	This error is likely caused by:
	I0127 02:58:06.065689  945208 kubeadm.go:310] 		- The kubelet is not running
	I0127 02:58:06.065812  945208 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 02:58:06.065838  945208 kubeadm.go:310] 
	I0127 02:58:06.065977  945208 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 02:58:06.066029  945208 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 02:58:06.066069  945208 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 02:58:06.066076  945208 kubeadm.go:310] 
	I0127 02:58:06.066200  945208 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 02:58:06.066341  945208 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 02:58:06.066365  945208 kubeadm.go:310] 
	I0127 02:58:06.066535  945208 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 02:58:06.066648  945208 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 02:58:06.066744  945208 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 02:58:06.066838  945208 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 02:58:06.066845  945208 kubeadm.go:310] 
	I0127 02:58:06.069232  945208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:58:06.069342  945208 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 02:58:06.069416  945208 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 02:58:06.069509  945208 kubeadm.go:394] duration metric: took 4m0.405892675s to StartCluster
	I0127 02:58:06.069589  945208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:58:06.069661  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:58:06.133405  945208 cri.go:89] found id: ""
	I0127 02:58:06.133439  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.133454  945208 logs.go:284] No container was found matching "kube-apiserver"
	I0127 02:58:06.133464  945208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 02:58:06.133534  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:58:06.195887  945208 cri.go:89] found id: ""
	I0127 02:58:06.195918  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.195930  945208 logs.go:284] No container was found matching "etcd"
	I0127 02:58:06.195939  945208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 02:58:06.196008  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:58:06.246173  945208 cri.go:89] found id: ""
	I0127 02:58:06.246202  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.246211  945208 logs.go:284] No container was found matching "coredns"
	I0127 02:58:06.246219  945208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:58:06.246279  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:58:06.299049  945208 cri.go:89] found id: ""
	I0127 02:58:06.299090  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.299102  945208 logs.go:284] No container was found matching "kube-scheduler"
	I0127 02:58:06.299110  945208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:58:06.299194  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:58:06.346668  945208 cri.go:89] found id: ""
	I0127 02:58:06.346701  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.346712  945208 logs.go:284] No container was found matching "kube-proxy"
	I0127 02:58:06.346721  945208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:58:06.346787  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:58:06.398653  945208 cri.go:89] found id: ""
	I0127 02:58:06.398689  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.398701  945208 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 02:58:06.398709  945208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 02:58:06.398780  945208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:58:06.446386  945208 cri.go:89] found id: ""
	I0127 02:58:06.446417  945208 logs.go:282] 0 containers: []
	W0127 02:58:06.446429  945208 logs.go:284] No container was found matching "kindnet"
	I0127 02:58:06.446445  945208 logs.go:123] Gathering logs for kubelet ...
	I0127 02:58:06.446460  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 02:58:06.533445  945208 logs.go:123] Gathering logs for dmesg ...
	I0127 02:58:06.533497  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 02:58:06.553857  945208 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:58:06.553915  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 02:58:06.725767  945208 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 02:58:06.725797  945208 logs.go:123] Gathering logs for CRI-O ...
	I0127 02:58:06.725813  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 02:58:06.882237  945208 logs.go:123] Gathering logs for container status ...
	I0127 02:58:06.882292  945208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 02:58:06.955915  945208 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 02:58:06.955987  945208 out.go:270] * 
	* 
	W0127 02:58:06.956059  945208 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 02:58:06.956082  945208 out.go:270] * 
	* 
	W0127 02:58:06.957358  945208 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 02:58:06.961146  945208 out.go:201] 
	W0127 02:58:06.962236  945208 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 02:58:06.962526  945208 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 02:58:06.962595  945208 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 02:58:06.963975  945208 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 6 (267.195899ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:58:07.283205  947779 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-542356" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-542356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (286.76s)
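The failure above is the pattern kubeadm itself describes: the kubelet never becomes healthy, so wait-control-plane times out and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal triage sketch, assuming shell access to the old-k8s-version-542356 VM (for example via `minikube -p old-k8s-version-542356 ssh`; the entry point is an assumption, the checks themselves are the ones the log already suggests):

	# Is the kubelet running, and what is it complaining about?
	systemctl status kubelet
	journalctl -xeu kubelet
	# The health endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# Did CRI-O start any control-plane containers at all?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then inspect the failing container's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the kubelet is failing on its cgroup driver, the suggestion minikube already printed applies: retry the start with --extra-config=kubelet.cgroup-driver=systemd.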

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (1621.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-844432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-844432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m59.784253743s)

                                                
                                                
-- stdout --
	* [no-preload-844432] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-844432" primary control-plane node in "no-preload-844432" cluster
	* Restarting existing kvm2 VM for "no-preload-844432" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-844432 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:56:49.965204  947047 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:56:49.965322  947047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:56:49.965331  947047 out.go:358] Setting ErrFile to fd 2...
	I0127 02:56:49.965335  947047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:56:49.965521  947047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:56:49.966070  947047 out.go:352] Setting JSON to false
	I0127 02:56:49.967064  947047 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13153,"bootTime":1737933457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:56:49.967187  947047 start.go:139] virtualization: kvm guest
	I0127 02:56:49.969149  947047 out.go:177] * [no-preload-844432] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:56:49.970342  947047 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:56:49.970365  947047 notify.go:220] Checking for updates...
	I0127 02:56:49.972397  947047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:56:49.973529  947047 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:56:49.974559  947047 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:56:49.975590  947047 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:56:49.977017  947047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:56:49.978660  947047 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:56:49.979030  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:56:49.979094  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:56:49.994127  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0127 02:56:49.994554  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:56:49.995158  947047 main.go:141] libmachine: Using API Version  1
	I0127 02:56:49.995182  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:56:49.995493  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:56:49.995691  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:56:49.995946  947047 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:56:49.996235  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:56:49.996272  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:56:50.011630  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0127 02:56:50.012133  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:56:50.012609  947047 main.go:141] libmachine: Using API Version  1
	I0127 02:56:50.012632  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:56:50.012971  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:56:50.013162  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:56:50.047972  947047 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:56:50.049214  947047 start.go:297] selected driver: kvm2
	I0127 02:56:50.049229  947047 start.go:901] validating driver "kvm2" against &{Name:no-preload-844432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-844432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:56:50.049333  947047 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:56:50.049972  947047 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.050053  947047 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:56:50.064870  947047 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:56:50.065418  947047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:56:50.065460  947047 cni.go:84] Creating CNI manager for ""
	I0127 02:56:50.065543  947047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:56:50.065616  947047 start.go:340] cluster config:
	{Name:no-preload-844432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-844432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:56:50.065749  947047 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.067308  947047 out.go:177] * Starting "no-preload-844432" primary control-plane node in "no-preload-844432" cluster
	I0127 02:56:50.068292  947047 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 02:56:50.068427  947047 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/config.json ...
	I0127 02:56:50.068502  947047 cache.go:107] acquiring lock: {Name:mke8eeb9611e2f119578f8cb618f0e4d0f2311e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068511  947047 cache.go:107] acquiring lock: {Name:mkbec78111dcd5c003ec4abde1c38e4ef0e9885a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068576  947047 cache.go:107] acquiring lock: {Name:mk215459aaf1c0d6262bd87fad3b0240a6546934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068608  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 02:56:50.068536  947047 cache.go:107] acquiring lock: {Name:mkb905fd58c1479cba219a03bb8ca8f1be0056bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068621  947047 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.465µs
	I0127 02:56:50.068640  947047 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 02:56:50.068649  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 02:56:50.068637  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 02:56:50.068628  947047 cache.go:107] acquiring lock: {Name:mk7df1ed1a10d74fd0ade768d409fd63042bdb84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068661  947047 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 135.389µs
	I0127 02:56:50.068665  947047 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 162.035µs
	I0127 02:56:50.068673  947047 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 02:56:50.068665  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 02:56:50.068691  947047 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 118.148µs
	I0127 02:56:50.068703  947047 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 02:56:50.068675  947047 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 02:56:50.068699  947047 start.go:360] acquireMachinesLock for no-preload-844432: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:56:50.068730  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 02:56:50.068700  947047 cache.go:107] acquiring lock: {Name:mk1d6b2eb8087aae4d8bac8209aae24f4ed3a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068742  947047 start.go:364] duration metric: took 25.182µs to acquireMachinesLock for "no-preload-844432"
	I0127 02:56:50.068710  947047 cache.go:107] acquiring lock: {Name:mke96d24832abf574cb2c4cff70e11103ae550d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.068746  947047 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 161.447µs
	I0127 02:56:50.068762  947047 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:56:50.068770  947047 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 02:56:50.068774  947047 fix.go:54] fixHost starting: 
	I0127 02:56:50.068820  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 02:56:50.068826  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 02:56:50.068831  947047 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 171.45µs
	I0127 02:56:50.068843  947047 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 02:56:50.068834  947047 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 182.26µs
	I0127 02:56:50.068860  947047 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 02:56:50.068951  947047 cache.go:107] acquiring lock: {Name:mk24680a72f84e13f0cf5c8826562620b020ef51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:56:50.069036  947047 cache.go:115] /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 02:56:50.069048  947047 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 158.385µs
	I0127 02:56:50.069057  947047 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 02:56:50.069067  947047 cache.go:87] Successfully saved all images to host disk.
	I0127 02:56:50.069135  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:56:50.069172  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:56:50.083460  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0127 02:56:50.083819  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:56:50.084281  947047 main.go:141] libmachine: Using API Version  1
	I0127 02:56:50.084302  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:56:50.084632  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:56:50.084855  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:56:50.085040  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 02:56:50.086528  947047 fix.go:112] recreateIfNeeded on no-preload-844432: state=Stopped err=<nil>
	I0127 02:56:50.086568  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	W0127 02:56:50.086697  947047 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:56:50.088238  947047 out.go:177] * Restarting existing kvm2 VM for "no-preload-844432" ...
	I0127 02:56:50.089299  947047 main.go:141] libmachine: (no-preload-844432) Calling .Start
	I0127 02:56:50.089486  947047 main.go:141] libmachine: (no-preload-844432) starting domain...
	I0127 02:56:50.089504  947047 main.go:141] libmachine: (no-preload-844432) ensuring networks are active...
	I0127 02:56:50.090406  947047 main.go:141] libmachine: (no-preload-844432) Ensuring network default is active
	I0127 02:56:50.090709  947047 main.go:141] libmachine: (no-preload-844432) Ensuring network mk-no-preload-844432 is active
	I0127 02:56:50.091087  947047 main.go:141] libmachine: (no-preload-844432) getting domain XML...
	I0127 02:56:50.091814  947047 main.go:141] libmachine: (no-preload-844432) creating domain...
	I0127 02:56:51.308994  947047 main.go:141] libmachine: (no-preload-844432) waiting for IP...
	I0127 02:56:51.309912  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:51.310424  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:51.310526  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:51.310390  947082 retry.go:31] will retry after 221.086462ms: waiting for domain to come up
	I0127 02:56:51.532889  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:51.533434  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:51.533462  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:51.533393  947082 retry.go:31] will retry after 315.883942ms: waiting for domain to come up
	I0127 02:56:51.850972  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:51.851447  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:51.851472  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:51.851425  947082 retry.go:31] will retry after 399.352393ms: waiting for domain to come up
	I0127 02:56:52.252572  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:52.253061  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:52.253095  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:52.253024  947082 retry.go:31] will retry after 388.645265ms: waiting for domain to come up
	I0127 02:56:52.643578  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:52.644125  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:52.644158  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:52.644079  947082 retry.go:31] will retry after 684.225012ms: waiting for domain to come up
	I0127 02:56:53.329856  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:53.330454  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:53.330485  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:53.330419  947082 retry.go:31] will retry after 689.82866ms: waiting for domain to come up
	I0127 02:56:54.022456  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:54.022954  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:54.022979  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:54.022937  947082 retry.go:31] will retry after 940.786247ms: waiting for domain to come up
	I0127 02:56:54.965044  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:54.965496  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:54.965522  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:54.965492  947082 retry.go:31] will retry after 1.19776844s: waiting for domain to come up
	I0127 02:56:56.164531  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:56.165068  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:56.165106  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:56.165046  947082 retry.go:31] will retry after 1.531853413s: waiting for domain to come up
	I0127 02:56:57.698062  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:57.698503  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:57.698546  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:57.698492  947082 retry.go:31] will retry after 1.425269577s: waiting for domain to come up
	I0127 02:56:59.126268  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:56:59.126764  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:56:59.126793  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:56:59.126722  947082 retry.go:31] will retry after 2.841726722s: waiting for domain to come up
	I0127 02:57:01.970330  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:01.970891  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:57:01.970959  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:57:01.970864  947082 retry.go:31] will retry after 2.53994854s: waiting for domain to come up
	I0127 02:57:04.513635  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:04.514082  947047 main.go:141] libmachine: (no-preload-844432) DBG | unable to find current IP address of domain no-preload-844432 in network mk-no-preload-844432
	I0127 02:57:04.514105  947047 main.go:141] libmachine: (no-preload-844432) DBG | I0127 02:57:04.514050  947082 retry.go:31] will retry after 3.47458119s: waiting for domain to come up
	I0127 02:57:07.992424  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:07.992983  947047 main.go:141] libmachine: (no-preload-844432) found domain IP: 192.168.72.144
	I0127 02:57:07.993034  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has current primary IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
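Annotation (not part of the test output): the run of "retry.go:31 ... will retry after ...: waiting for domain to come up" lines above is the KVM driver polling libvirt for the guest's DHCP lease with a growing, jittered delay until the IP appears at 02:57:07.992983. A minimal sketch of that polling pattern, under the assumption of a simple grow-and-jitter backoff; waitForIP and the durations here are illustrative, not the actual retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts, similar to the
// "will retry after ...: waiting for domain to come up" lines in the log.
// lookup stands in for "ask libvirt for the domain's current DHCP lease".
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add up to 50% jitter and grow the base delay, capped at a few seconds.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		if backoff < 4*time.Second {
			backoff += backoff / 2
		}
	}
	return "", errors.New("timed out waiting for domain to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet") // corresponds to "unable to find current IP address"
		}
		return "192.168.72.144", nil
	}, 30*time.Second)
	fmt.Println(ip, err, "after", attempts, "attempts")
}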
	I0127 02:57:07.993045  947047 main.go:141] libmachine: (no-preload-844432) reserving static IP address...
	I0127 02:57:07.993478  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "no-preload-844432", mac: "52:54:00:17:d7:60", ip: "192.168.72.144"} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:07.993510  947047 main.go:141] libmachine: (no-preload-844432) DBG | skip adding static IP to network mk-no-preload-844432 - found existing host DHCP lease matching {name: "no-preload-844432", mac: "52:54:00:17:d7:60", ip: "192.168.72.144"}
	I0127 02:57:07.993528  947047 main.go:141] libmachine: (no-preload-844432) reserved static IP address 192.168.72.144 for domain no-preload-844432
	I0127 02:57:07.993541  947047 main.go:141] libmachine: (no-preload-844432) waiting for SSH...
	I0127 02:57:07.993552  947047 main.go:141] libmachine: (no-preload-844432) DBG | Getting to WaitForSSH function...
	I0127 02:57:07.995887  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:07.996231  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:07.996272  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:07.996360  947047 main.go:141] libmachine: (no-preload-844432) DBG | Using SSH client type: external
	I0127 02:57:07.996390  947047 main.go:141] libmachine: (no-preload-844432) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa (-rw-------)
	I0127 02:57:07.996424  947047 main.go:141] libmachine: (no-preload-844432) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 02:57:07.996434  947047 main.go:141] libmachine: (no-preload-844432) DBG | About to run SSH command:
	I0127 02:57:07.996444  947047 main.go:141] libmachine: (no-preload-844432) DBG | exit 0
	I0127 02:57:08.117142  947047 main.go:141] libmachine: (no-preload-844432) DBG | SSH cmd err, output: <nil>: 
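Annotation (not part of the test output): the "waiting for SSH" step is just the external ssh client from 02:57:07.996424 running "exit 0" until it succeeds. A hedged sketch of that reachability probe; the host, user, key path and options are copied from the log, while sshReady and the retry loop are made up for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "ssh ... exit 0" succeeds, i.e. sshd in the guest
// accepts a connection with the machine's private key.
func sshReady(user, host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady("docker", "192.168.72.144", key) {
			fmt.Println("SSH is up") // matches the empty "SSH cmd err, output: <nil>" above
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}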
	I0127 02:57:08.117586  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetConfigRaw
	I0127 02:57:08.118272  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetIP
	I0127 02:57:08.121005  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.121339  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.121388  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.121589  947047 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/config.json ...
	I0127 02:57:08.121845  947047 machine.go:93] provisionDockerMachine start ...
	I0127 02:57:08.121871  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:08.122113  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.124403  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.124773  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.124826  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.124943  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.125129  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.125353  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.125515  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.125716  947047 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:08.125973  947047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0127 02:57:08.125989  947047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:57:08.225140  947047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 02:57:08.225169  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetMachineName
	I0127 02:57:08.225434  947047 buildroot.go:166] provisioning hostname "no-preload-844432"
	I0127 02:57:08.225462  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetMachineName
	I0127 02:57:08.225665  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.228340  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.228676  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.228706  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.228812  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.228997  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.229189  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.229336  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.229539  947047 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:08.229765  947047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0127 02:57:08.229783  947047 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-844432 && echo "no-preload-844432" | sudo tee /etc/hostname
	I0127 02:57:08.342988  947047 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-844432
	
	I0127 02:57:08.343022  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.346323  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.346751  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.346790  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.347008  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.347199  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.347393  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.347549  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.347700  947047 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:08.347878  947047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0127 02:57:08.347896  947047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-844432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-844432/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-844432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:57:08.453911  947047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:57:08.453946  947047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 02:57:08.453979  947047 buildroot.go:174] setting up certificates
	I0127 02:57:08.453988  947047 provision.go:84] configureAuth start
	I0127 02:57:08.453999  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetMachineName
	I0127 02:57:08.454316  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetIP
	I0127 02:57:08.457117  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.457457  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.457485  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.457668  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.460112  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.460461  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.460510  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.460662  947047 provision.go:143] copyHostCerts
	I0127 02:57:08.460725  947047 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 02:57:08.460747  947047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 02:57:08.460817  947047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 02:57:08.460992  947047 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 02:57:08.461002  947047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 02:57:08.461031  947047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 02:57:08.461114  947047 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 02:57:08.461122  947047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 02:57:08.461157  947047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 02:57:08.461222  947047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.no-preload-844432 san=[127.0.0.1 192.168.72.144 localhost minikube no-preload-844432]
	I0127 02:57:08.611904  947047 provision.go:177] copyRemoteCerts
	I0127 02:57:08.611970  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:57:08.611996  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.614817  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.615133  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.615165  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.615361  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.615564  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.615716  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.615846  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 02:57:08.695821  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:57:08.720512  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 02:57:08.743953  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:57:08.766637  947047 provision.go:87] duration metric: took 312.632451ms to configureAuth
	I0127 02:57:08.766677  947047 buildroot.go:189] setting minikube options for container-runtime
	I0127 02:57:08.766938  947047 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:57:08.767029  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.769842  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.770174  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.770206  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.770370  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.770564  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.770738  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.770888  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.771054  947047 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:08.771233  947047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0127 02:57:08.771248  947047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 02:57:08.984554  947047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 02:57:08.984588  947047 machine.go:96] duration metric: took 862.724823ms to provisionDockerMachine
	I0127 02:57:08.984605  947047 start.go:293] postStartSetup for "no-preload-844432" (driver="kvm2")
	I0127 02:57:08.984620  947047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:57:08.984649  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:08.985031  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:57:08.985074  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:08.988081  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.988507  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:08.988536  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:08.988682  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:08.988911  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:08.989093  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:08.989226  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 02:57:09.067150  947047 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:57:09.071111  947047 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 02:57:09.071140  947047 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 02:57:09.071214  947047 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 02:57:09.071309  947047 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 02:57:09.071428  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:57:09.080823  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:57:09.102839  947047 start.go:296] duration metric: took 118.208433ms for postStartSetup
	I0127 02:57:09.102895  947047 fix.go:56] duration metric: took 19.034121729s for fixHost
	I0127 02:57:09.102922  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:09.105602  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.105914  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:09.105936  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.106100  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:09.106295  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:09.106462  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:09.106628  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:09.106793  947047 main.go:141] libmachine: Using SSH client type: native
	I0127 02:57:09.107000  947047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0127 02:57:09.107017  947047 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 02:57:09.205447  947047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946629.178913690
	
	I0127 02:57:09.205487  947047 fix.go:216] guest clock: 1737946629.178913690
	I0127 02:57:09.205497  947047 fix.go:229] Guest: 2025-01-27 02:57:09.17891369 +0000 UTC Remote: 2025-01-27 02:57:09.102899329 +0000 UTC m=+19.175264531 (delta=76.014361ms)
	I0127 02:57:09.205533  947047 fix.go:200] guest clock delta is within tolerance: 76.014361ms
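Annotation (not part of the test output): the fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the 76.014361ms delta as within tolerance. A small sketch of that comparison using the exact timestamps from the log; the 2s tolerance here is purely illustrative, not minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1737946629.178913690" (date +%s.%N output) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		} else {
			// Right-pad so ".1789" means 178,900,000ns, not 1,789ns.
			frac += strings.Repeat("0", 9-len(frac))
		}
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737946629.178913690")
	if err != nil {
		panic(err)
	}
	host := time.Date(2025, 1, 27, 2, 57, 9, 102899329, time.UTC) // the "Remote" timestamp from the log
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // illustrative threshold only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance,
		math.Abs(float64(delta)) <= float64(tolerance))
}

Running this prints delta=76.014361ms, matching the delta reported above.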
	I0127 02:57:09.205541  947047 start.go:83] releasing machines lock for "no-preload-844432", held for 19.136791381s
	I0127 02:57:09.205564  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:09.205813  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetIP
	I0127 02:57:09.208513  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.208868  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:09.208896  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.209079  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:09.209627  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:09.209819  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 02:57:09.209922  947047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:57:09.209974  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:09.210051  947047 ssh_runner.go:195] Run: cat /version.json
	I0127 02:57:09.210076  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 02:57:09.212764  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.212845  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.213127  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:09.213157  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.213240  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:09.213251  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:09.213277  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:09.213435  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 02:57:09.213475  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:09.213572  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 02:57:09.213641  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:09.213724  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 02:57:09.213795  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 02:57:09.213854  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 02:57:09.314692  947047 ssh_runner.go:195] Run: systemctl --version
	I0127 02:57:09.320481  947047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 02:57:09.464150  947047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 02:57:09.469969  947047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 02:57:09.470067  947047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:57:09.485515  947047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 02:57:09.485543  947047 start.go:495] detecting cgroup driver to use...
	I0127 02:57:09.485614  947047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 02:57:09.501729  947047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 02:57:09.515294  947047 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:57:09.515364  947047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:57:09.528309  947047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:57:09.541679  947047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:57:09.658701  947047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:57:09.826381  947047 docker.go:233] disabling docker service ...
	I0127 02:57:09.826476  947047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:57:09.839830  947047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:57:09.851896  947047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:57:09.969401  947047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:57:10.088020  947047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:57:10.101657  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:57:10.118676  947047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 02:57:10.118739  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.128524  947047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 02:57:10.128597  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.138301  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.149350  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.161310  947047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:57:10.172386  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.183995  947047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.200608  947047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 02:57:10.210921  947047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:57:10.220619  947047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 02:57:10.220673  947047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 02:57:10.233162  947047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
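Annotation (not part of the test output): the sysctl failure at 02:57:10.220619 is expected on a fresh guest, because the bridge netfilter keys only exist once br_netfilter is loaded; the tool then falls back to modprobe and enables IPv4 forwarding. A simplified local sketch of that check-then-load fallback (the real run issues these commands over SSH as root inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, load the module that provides it.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available yet, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
	fmt.Println("netfilter prerequisites in place")
}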
	I0127 02:57:10.242015  947047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:10.349606  947047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 02:57:10.448118  947047 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 02:57:10.448211  947047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 02:57:10.454276  947047 start.go:563] Will wait 60s for crictl version
	I0127 02:57:10.454339  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:10.458087  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:57:10.498341  947047 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
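Annotation (not part of the test output): after "systemctl restart crio" the tool waits up to 60s for /var/run/crio/crio.sock to appear, then probes the runtime with crictl, which yields the version block above. A local sketch of that socket wait and probe, assuming a simple stat-based poll (the real run executes these steps over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until path exists (the CRI-O socket appears shortly
// after the restart) or the deadline passes.
func waitForSocket(path string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Once the socket is up, the runtime version can be queried, as in the log.
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl version failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}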
	I0127 02:57:10.498438  947047 ssh_runner.go:195] Run: crio --version
	I0127 02:57:10.526671  947047 ssh_runner.go:195] Run: crio --version
	I0127 02:57:10.554482  947047 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 02:57:10.555927  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetIP
	I0127 02:57:10.558772  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:10.559131  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 02:57:10.559154  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 02:57:10.559379  947047 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 02:57:10.563473  947047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:57:10.576580  947047 kubeadm.go:883] updating cluster {Name:no-preload-844432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-844432 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:57:10.576694  947047 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 02:57:10.576728  947047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:57:10.614319  947047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 02:57:10.614347  947047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 02:57:10.614424  947047 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:10.614452  947047 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:10.614457  947047 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:10.614479  947047 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:10.614507  947047 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:10.614490  947047 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:10.614549  947047 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 02:57:10.614527  947047 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:10.616112  947047 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:10.616128  947047 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:10.616113  947047 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:10.616168  947047 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:10.616124  947047 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:10.616155  947047 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 02:57:10.616149  947047 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:10.616214  947047 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:10.816262  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:10.820982  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0127 02:57:10.838698  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:10.875265  947047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0127 02:57:10.875325  947047 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:10.875370  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:10.879404  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:10.882234  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:10.884613  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:10.890432  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:10.984520  947047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0127 02:57:10.984575  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:10.984581  947047 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:10.984714  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:10.996101  947047 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0127 02:57:10.996169  947047 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:10.996222  947047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0127 02:57:10.996268  947047 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:10.996321  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:10.996231  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:10.996227  947047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0127 02:57:10.996459  947047 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:10.996501  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:11.011136  947047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0127 02:57:11.011195  947047 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:11.011252  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:11.037286  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:11.037326  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:11.037451  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:11.037462  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:11.037497  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:11.037508  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:11.147101  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:11.149637  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:11.176602  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:11.176615  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:11.176638  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:11.176781  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 02:57:11.215294  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 02:57:11.239538  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 02:57:11.313038  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 02:57:11.313092  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 02:57:11.313038  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 02:57:11.313135  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 02:57:11.313225  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 02:57:11.346331  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 02:57:11.346350  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 02:57:11.346465  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0127 02:57:11.346464  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0127 02:57:11.397303  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 02:57:11.397437  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 02:57:11.406205  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0127 02:57:11.406236  947047 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 02:57:11.406287  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 02:57:11.406370  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 02:57:11.406370  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 02:57:11.406422  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0127 02:57:11.406474  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 02:57:11.406497  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0127 02:57:11.406498  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 02:57:11.406523  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0127 02:57:11.830843  947047 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:13.283203  947047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.876698473s)
	I0127 02:57:13.283258  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0127 02:57:13.283258  947047 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.452386212s)
	I0127 02:57:13.283300  947047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0127 02:57:13.283225  947047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.876696801s)
	I0127 02:57:13.283373  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0127 02:57:13.283219  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (1.876885178s)
	I0127 02:57:13.283386  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0127 02:57:13.283405  947047 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0127 02:57:13.283341  947047 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:13.283451  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0127 02:57:13.283486  947047 ssh_runner.go:195] Run: which crictl
	I0127 02:57:15.346197  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.06271851s)
	I0127 02:57:15.346237  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0127 02:57:15.346252  947047 ssh_runner.go:235] Completed: which crictl: (2.062739936s)
	I0127 02:57:15.346265  947047 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0127 02:57:15.346322  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:15.346323  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0127 02:57:18.869091  947047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.522726984s)
	I0127 02:57:18.869121  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.522693903s)
	I0127 02:57:18.869153  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0127 02:57:18.869184  947047 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 02:57:18.869215  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:18.869223  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 02:57:20.746976  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.877727597s)
	I0127 02:57:20.747015  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0127 02:57:20.747019  947047 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.877779631s)
	I0127 02:57:20.747043  947047 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 02:57:20.747095  947047 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:57:20.747097  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 02:57:20.814058  947047 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 02:57:20.814198  947047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 02:57:22.614963  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.867793181s)
	I0127 02:57:22.614980  947047 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.800751647s)
	I0127 02:57:22.614995  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0127 02:57:22.615019  947047 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0127 02:57:22.615025  947047 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 02:57:22.615084  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 02:57:24.573740  947047 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.958621092s)
	I0127 02:57:24.573783  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0127 02:57:24.573814  947047 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 02:57:24.573858  947047 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 02:57:25.423254  947047 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 02:57:25.423300  947047 cache_images.go:123] Successfully loaded all cached images
	I0127 02:57:25.423308  947047 cache_images.go:92] duration metric: took 14.808948361s to LoadCachedImages
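Editor's note: the lines above show each cached image tarball under /var/lib/minikube/images being fed to `sudo podman load -i <tarball>` over SSH. The following is a minimal local sketch of that step, shelling out to podman the same way; the glob path and use of os/exec are assumptions for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	// Hypothetical cache directory mirroring /var/lib/minikube/images.
	tarballs, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tarballs {
		start := time.Now()
		// Equivalent to the logged `sudo podman load -i <tarball>` calls.
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s: %v\n%s", t, err, out)
		}
		fmt.Printf("loaded %s in %s\n", t, time.Since(start).Round(time.Millisecond))
	}
}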
	I0127 02:57:25.423328  947047 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.32.1 crio true true} ...
	I0127 02:57:25.423473  947047 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-844432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-844432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
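Editor's note: the kubelet unit text above is rendered per node (binary path, hostname override, node IP). A minimal sketch of that rendering using text/template follows; the type name, field names and template wording are illustrative assumptions, not minikube's actual code.

package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinDir   string // e.g. /var/lib/minikube/binaries/v1.32.1
	NodeName string // e.g. no-preload-844432
	NodeIP   string // e.g. 192.168.72.144
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.32.1",
		NodeName: "no-preload-844432",
		NodeIP:   "192.168.72.144",
	})
}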
	I0127 02:57:25.423559  947047 ssh_runner.go:195] Run: crio config
	I0127 02:57:25.470187  947047 cni.go:84] Creating CNI manager for ""
	I0127 02:57:25.470213  947047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:57:25.470224  947047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:57:25.470248  947047 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-844432 NodeName:no-preload-844432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:57:25.470381  947047 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-844432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.144"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:57:25.470449  947047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:57:25.479927  947047 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:57:25.480019  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:57:25.488657  947047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 02:57:25.504174  947047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:57:25.519462  947047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
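Editor's note: the kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). An illustrative sketch of reading that stream back and printing each document's kind, assuming gopkg.in/yaml.v3 and the path shown in the log:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}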
	I0127 02:57:25.534778  947047 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0127 02:57:25.538431  947047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
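Editor's note: the bash one-liner above drops any stale control-plane.minikube.internal mapping from /etc/hosts and appends the current IP. A minimal Go sketch of the same filter-and-append logic; it prints the repaired file instead of copying it over /etc/hosts, which is a deliberate simplification to keep the example side-effect free.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" (like `grep -v`)
// and appends the desired "<ip>\t<name>" mapping.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(ensureHostsEntry(string(data), "192.168.72.144", "control-plane.minikube.internal"))
}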
	I0127 02:57:25.549225  947047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:57:25.682079  947047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:57:25.698699  947047 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432 for IP: 192.168.72.144
	I0127 02:57:25.698730  947047 certs.go:194] generating shared ca certs ...
	I0127 02:57:25.698754  947047 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:57:25.698960  947047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 02:57:25.699021  947047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 02:57:25.699036  947047 certs.go:256] generating profile certs ...
	I0127 02:57:25.699153  947047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/client.key
	I0127 02:57:25.699226  947047 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/apiserver.key.a0d1eaaa
	I0127 02:57:25.699280  947047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/proxy-client.key
	I0127 02:57:25.699423  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 02:57:25.699462  947047 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 02:57:25.699476  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 02:57:25.699511  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:57:25.699542  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:57:25.699570  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 02:57:25.699640  947047 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 02:57:25.700495  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:57:25.738158  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:57:25.771063  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:57:25.802205  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:57:25.827241  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 02:57:25.851085  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:57:25.884896  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:57:25.909103  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/no-preload-844432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:57:25.933718  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:57:25.956372  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 02:57:25.978411  947047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 02:57:26.000216  947047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:57:26.015906  947047 ssh_runner.go:195] Run: openssl version
	I0127 02:57:26.021686  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:57:26.031653  947047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:26.035933  947047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:26.035988  947047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:57:26.041513  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:57:26.051484  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 02:57:26.061508  947047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 02:57:26.066007  947047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 02:57:26.066072  947047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 02:57:26.071271  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 02:57:26.081016  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 02:57:26.090745  947047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 02:57:26.095002  947047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 02:57:26.095062  947047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 02:57:26.100250  947047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:57:26.109840  947047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:57:26.114012  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:57:26.119453  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:57:26.124799  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:57:26.130222  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:57:26.135625  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:57:26.140835  947047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
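Editor's note: the `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate will still be valid in 24 hours. The same predicate can be expressed with crypto/x509, as in this sketch; the certificate path is taken from the log and the 24h window mirrors the 86400-second argument.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same check as `-checkend 86400`: is the cert valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}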
	I0127 02:57:26.146201  947047 kubeadm.go:392] StartCluster: {Name:no-preload-844432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-844432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:57:26.146323  947047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 02:57:26.146376  947047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:26.180405  947047 cri.go:89] found id: ""
	I0127 02:57:26.180482  947047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:57:26.189741  947047 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:57:26.189765  947047 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:57:26.189812  947047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:57:26.198309  947047 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:57:26.199038  947047 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-844432" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:57:26.199398  947047 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-844432" cluster setting kubeconfig missing "no-preload-844432" context setting]
	I0127 02:57:26.199966  947047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:57:26.201485  947047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:57:26.210091  947047 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0127 02:57:26.210121  947047 kubeadm.go:1160] stopping kube-system containers ...
	I0127 02:57:26.210142  947047 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 02:57:26.210220  947047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:57:26.243866  947047 cri.go:89] found id: ""
	I0127 02:57:26.243947  947047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 02:57:26.260078  947047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:57:26.269277  947047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:57:26.269298  947047 kubeadm.go:157] found existing configuration files:
	
	I0127 02:57:26.269356  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:57:26.277481  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:57:26.277545  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:57:26.286294  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:57:26.294423  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:57:26.294484  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:57:26.302894  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:57:26.310981  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:57:26.311030  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:57:26.319337  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:57:26.327462  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:57:26.327500  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:57:26.335970  947047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:57:26.344430  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:26.450275  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:27.331600  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:27.521103  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:27.593351  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:57:27.662597  947047 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:57:27.662689  947047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:28.163096  947047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:28.663007  947047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:57:28.677760  947047 api_server.go:72] duration metric: took 1.015163187s to wait for apiserver process to appear ...
	I0127 02:57:28.677789  947047 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:57:28.677815  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:33.680952  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:57:33.681014  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:38.683304  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:57:38.683351  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:43.685517  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:57:43.685574  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:48.685929  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:57:48.686019  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:49.258923  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": read tcp 192.168.72.1:36616->192.168.72.144:8443: read: connection reset by peer
	I0127 02:57:49.258997  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:49.259600  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
	I0127 02:57:49.678037  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:49.678748  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
	I0127 02:57:50.178261  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:57:55.178581  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:57:55.178626  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:00.178919  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:58:00.178975  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:05.180106  947047 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 02:58:05.180145  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:05.310204  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 02:58:05.310255  947047 api_server.go:103] status: https://192.168.72.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 02:58:05.678843  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:05.686540  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:05.686580  947047 api_server.go:103] status: https://192.168.72.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:06.178171  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:06.189252  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:06.189303  947047 api_server.go:103] status: https://192.168.72.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:06.678026  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:06.698667  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 02:58:06.698708  947047 api_server.go:103] status: https://192.168.72.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 02:58:07.178070  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 02:58:07.183518  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 200:
	ok
	I0127 02:58:07.192989  947047 api_server.go:141] control plane version: v1.32.1
	I0127 02:58:07.193023  947047 api_server.go:131] duration metric: took 38.515226117s to wait for apiserver health ...
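Editor's note: the polling above hits https://192.168.72.144:8443/healthz until the apiserver answers 200 "ok", tolerating the 403 (anonymous request before RBAC bootstrap) and 500 (poststarthooks not yet complete) responses along the way. A minimal sketch of that loop, under assumptions: InsecureSkipVerify and the fixed timeouts/intervals are simplifications for illustration, not minikube's exact settings.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.144:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}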
	I0127 02:58:07.193036  947047 cni.go:84] Creating CNI manager for ""
	I0127 02:58:07.193045  947047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:58:07.195271  947047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 02:58:07.196457  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 02:58:07.209323  947047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
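Editor's note: the log does not show the contents of the 496-byte 1-k8s.conflist it copies to /etc/cni/net.d, so the JSON below is only an assumption about what a typical bridge + portmap conflist for pod CIDR 10.244.0.0/16 looks like; the sketch writes it to a local file rather than /etc/cni/net.d to stay side-effect free.

package main

import (
	"log"
	"os"
)

// Hypothetical bridge CNI conflist; not the file minikube actually installs.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}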
	I0127 02:58:07.230448  947047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:58:07.259384  947047 system_pods.go:59] 8 kube-system pods found
	I0127 02:58:07.259430  947047 system_pods.go:61] "coredns-668d6bf9bc-584tb" [d02cf5fe-405c-40bd-8c06-80ac3b075f44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 02:58:07.259441  947047 system_pods.go:61] "etcd-no-preload-844432" [a1ee791c-60ff-4498-a2b0-5aab729aa722] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:58:07.259453  947047 system_pods.go:61] "kube-apiserver-no-preload-844432" [e416df3d-a865-4947-bb2a-c8e710ef16c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 02:58:07.259462  947047 system_pods.go:61] "kube-controller-manager-no-preload-844432" [03fabb34-a319-4c46-87d6-e387e874fb69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 02:58:07.259468  947047 system_pods.go:61] "kube-proxy-grpdg" [ebd0c08e-7405-4875-9140-2cea6258b961] Running
	I0127 02:58:07.259478  947047 system_pods.go:61] "kube-scheduler-no-preload-844432" [57332b8a-bb02-4698-82d8-7f865f6fb710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 02:58:07.259489  947047 system_pods.go:61] "metrics-server-f79f97bbb-6mqdm" [4542d137-da1d-47b1-8dfb-e6dc52126b59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 02:58:07.259496  947047 system_pods.go:61] "storage-provisioner" [9f7b420b-2b99-427d-afd9-932218972c01] Running
	I0127 02:58:07.259506  947047 system_pods.go:74] duration metric: took 29.035576ms to wait for pod list to return data ...
	I0127 02:58:07.259519  947047 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:58:07.271986  947047 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 02:58:07.272031  947047 node_conditions.go:123] node cpu capacity is 2
	I0127 02:58:07.272047  947047 node_conditions.go:105] duration metric: took 12.521928ms to run NodePressure ...
	I0127 02:58:07.272072  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 02:58:07.682684  947047 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 02:58:07.691849  947047 kubeadm.go:739] kubelet initialised
	I0127 02:58:07.691882  947047 kubeadm.go:740] duration metric: took 9.17052ms waiting for restarted kubelet to initialise ...
	I0127 02:58:07.691895  947047 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:58:07.699119  947047 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-584tb" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:09.708610  947047 pod_ready.go:103] pod "coredns-668d6bf9bc-584tb" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:12.205669  947047 pod_ready.go:103] pod "coredns-668d6bf9bc-584tb" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:14.206849  947047 pod_ready.go:93] pod "coredns-668d6bf9bc-584tb" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:14.206876  947047 pod_ready.go:82] duration metric: took 6.507729247s for pod "coredns-668d6bf9bc-584tb" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:14.206890  947047 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:14.212356  947047 pod_ready.go:93] pod "etcd-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:14.212379  947047 pod_ready.go:82] duration metric: took 5.481306ms for pod "etcd-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:14.212392  947047 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:14.217565  947047 pod_ready.go:93] pod "kube-apiserver-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:14.217587  947047 pod_ready.go:82] duration metric: took 5.186426ms for pod "kube-apiserver-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:14.217600  947047 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:16.224274  947047 pod_ready.go:103] pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:17.223944  947047 pod_ready.go:93] pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:17.223972  947047 pod_ready.go:82] duration metric: took 3.006363063s for pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:17.223987  947047 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-grpdg" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:17.229175  947047 pod_ready.go:93] pod "kube-proxy-grpdg" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:17.229198  947047 pod_ready.go:82] duration metric: took 5.203537ms for pod "kube-proxy-grpdg" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:17.229210  947047 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:17.234049  947047 pod_ready.go:93] pod "kube-scheduler-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 02:58:17.234074  947047 pod_ready.go:82] duration metric: took 4.85614ms for pod "kube-scheduler-no-preload-844432" in "kube-system" namespace to be "Ready" ...
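Editor's note: each pod_ready.go line above reflects a check of the pod's Ready condition; the metrics-server pod that follows never reaches Ready:"True". A sketch of that per-pod check, assuming client-go; the kubeconfig path, namespace, pod name and timing are placeholders taken from or modeled on the log, not minikube's code.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-f79f97bbb-6mqdm", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}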
	I0127 02:58:17.234087  947047 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace to be "Ready" ...
	I0127 02:58:19.240549  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:21.242076  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:23.757241  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:26.260468  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:28.740271  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:30.740497  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:32.740825  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:34.741370  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:37.240604  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:39.240742  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:41.241724  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:43.740479  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:45.740675  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:48.241104  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:50.739899  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:52.740667  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.240217  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:57.243894  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:59.740359  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:01.740491  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:04.242592  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:06.739517  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:08.739902  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:10.740509  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:13.240932  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:15.753068  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:18.240947  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:20.241407  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:22.740076  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:24.740823  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:27.240989  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:29.241554  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:31.740032  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:33.741185  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:36.240297  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:38.240696  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:40.241770  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:42.742214  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:45.240119  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:47.240319  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:49.240969  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:51.743896  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:54.240455  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:56.741095  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:58.741259  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:00.741378  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:03.239946  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:05.241010  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:07.741997  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:10.240376  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:12.241328  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:14.740249  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:17.239815  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:19.243264  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:21.740435  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:23.898335  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:26.241091  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:28.242226  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:30.251848  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:32.740963  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:34.742703  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:37.241626  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:39.739990  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:42.240334  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:44.240533  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:46.240940  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:48.241477  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:50.739176  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:52.739710  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:54.740487  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:56.740665  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:00:58.741057  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:00.741110  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:02.748416  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:05.241558  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:07.740591  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:09.741513  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:12.240058  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:14.240396  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:16.240760  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:18.740354  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:20.740614  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:23.240492  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:25.242114  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:27.243151  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:29.740819  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:31.740981  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:33.742131  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:36.241605  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:38.242178  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:40.741881  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:43.240794  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:45.241555  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:47.752008  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:50.240701  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:52.241003  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:54.740390  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:56.740491  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:01:58.744370  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:01.240785  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:03.739769  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:05.741665  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:08.240445  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:10.741391  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:12.741493  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:15.241377  947047 pod_ready.go:103] pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace has status "Ready":"False"
	I0127 03:02:17.235056  947047 pod_ready.go:82] duration metric: took 4m0.000937816s for pod "metrics-server-f79f97bbb-6mqdm" in "kube-system" namespace to be "Ready" ...
	E0127 03:02:17.235088  947047 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 03:02:17.235109  947047 pod_ready.go:39] duration metric: took 4m9.543201397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:17.235141  947047 kubeadm.go:597] duration metric: took 4m51.045369992s to restartPrimaryControlPlane
	W0127 03:02:17.235217  947047 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
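For context, the four-minute wait logged above is a standard "poll until the pod's Ready condition is True" loop. A minimal client-go sketch of that pattern follows; it is hypothetical (not minikube's pod_ready.go), and the kubeconfig path, 2s interval, and error handling are assumptions — only the namespace, pod name, and 4m timeout come from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed: ~/.kube/config
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s for up to 4 minutes, matching the timeout seen in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                p, getErr := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-f79f97bbb-6mqdm", metav1.GetOptions{})
                if getErr != nil {
                    return false, nil // treat transient lookup errors as "not ready yet"
                }
                return podReady(p), nil
            })
        if err != nil {
            fmt.Println("pod never reported Ready:", err) // e.g. context deadline exceeded
        }
    }
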
	I0127 03:02:17.235246  947047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:02:44.949366  947047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.71408098s)
	I0127 03:02:44.949471  947047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:02:44.969346  947047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:02:44.986681  947047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:02:45.001060  947047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:02:45.001090  947047 kubeadm.go:157] found existing configuration files:
	
	I0127 03:02:45.001154  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:02:45.013568  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:02:45.013643  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:02:45.035139  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:02:45.047379  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:02:45.047453  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:02:45.064159  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:02:45.078334  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:02:45.078409  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:02:45.098888  947047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:02:45.108304  947047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:02:45.108377  947047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
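The grep/rm sequence above is minikube checking whether each kubeconfig under /etc/kubernetes references the expected control-plane endpoint and deleting any that do not (here they are simply absent after the reset, so every check exits with status 2). A small hypothetical sketch of the same decision, run against local files rather than over SSH (the helper name and local-file approach are assumptions):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes kubeconfig files that do not reference the
    // expected API server endpoint, mirroring the grep/rm sequence in the log.
    // minikube issues the equivalent commands over SSH via ssh_runner.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: drop it so kubeadm regenerates it.
                _ = os.Remove(p)
                fmt.Printf("removed stale %s\n", p)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
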
	I0127 03:02:45.117596  947047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:02:45.173805  947047 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:02:45.173965  947047 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:02:45.288767  947047 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:02:45.288975  947047 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:02:45.289110  947047 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:02:45.301044  947047 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:02:45.303322  947047 out.go:235]   - Generating certificates and keys ...
	I0127 03:02:45.303439  947047 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:02:45.303532  947047 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:02:45.303666  947047 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:02:45.303760  947047 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:02:45.303856  947047 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:02:45.303922  947047 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:02:45.304005  947047 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:02:45.304087  947047 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:02:45.304676  947047 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:02:45.304799  947047 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:02:45.304859  947047 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:02:45.304969  947047 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:02:45.475219  947047 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:02:45.585607  947047 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:02:45.731196  947047 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:02:46.013377  947047 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:02:46.186513  947047 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:02:46.187171  947047 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:02:46.190790  947047 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:02:46.192239  947047 out.go:235]   - Booting up control plane ...
	I0127 03:02:46.192367  947047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:02:46.192477  947047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:02:46.192602  947047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:02:46.213312  947047 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:02:46.220276  947047 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:02:46.220361  947047 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:02:46.373095  947047 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:02:46.373265  947047 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:02:46.886561  947047 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 513.057314ms
	I0127 03:02:46.886700  947047 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:02:52.390817  947047 kubeadm.go:310] [api-check] The API server is healthy after 5.502448094s
	I0127 03:02:52.404528  947047 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:02:52.422743  947047 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:02:52.458350  947047 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:02:52.458671  947047 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-844432 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:02:52.473460  947047 kubeadm.go:310] [bootstrap-token] Using token: np8s37.nupfbr19umvugceu
	I0127 03:02:52.474790  947047 out.go:235]   - Configuring RBAC rules ...
	I0127 03:02:52.474952  947047 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:02:52.480818  947047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:02:52.487753  947047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:02:52.494871  947047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:02:52.497963  947047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:02:52.501320  947047 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:02:52.799349  947047 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:02:53.227305  947047 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:02:53.796785  947047 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:02:53.798364  947047 kubeadm.go:310] 
	I0127 03:02:53.798466  947047 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:02:53.798479  947047 kubeadm.go:310] 
	I0127 03:02:53.798584  947047 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:02:53.798595  947047 kubeadm.go:310] 
	I0127 03:02:53.798632  947047 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:02:53.798713  947047 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:02:53.798793  947047 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:02:53.798804  947047 kubeadm.go:310] 
	I0127 03:02:53.798931  947047 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:02:53.798952  947047 kubeadm.go:310] 
	I0127 03:02:53.799019  947047 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:02:53.799029  947047 kubeadm.go:310] 
	I0127 03:02:53.799107  947047 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:02:53.799238  947047 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:02:53.799341  947047 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:02:53.799352  947047 kubeadm.go:310] 
	I0127 03:02:53.799472  947047 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:02:53.799576  947047 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:02:53.799588  947047 kubeadm.go:310] 
	I0127 03:02:53.799712  947047 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token np8s37.nupfbr19umvugceu \
	I0127 03:02:53.799886  947047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:02:53.799953  947047 kubeadm.go:310] 	--control-plane 
	I0127 03:02:53.799972  947047 kubeadm.go:310] 
	I0127 03:02:53.800090  947047 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:02:53.800101  947047 kubeadm.go:310] 
	I0127 03:02:53.800213  947047 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token np8s37.nupfbr19umvugceu \
	I0127 03:02:53.800341  947047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:02:53.801117  947047 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:02:53.801175  947047 cni.go:84] Creating CNI manager for ""
	I0127 03:02:53.801194  947047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:02:53.802971  947047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:02:53.804156  947047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:02:53.817463  947047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
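The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge-plus-portmap conflist of the same general shape (every field value here is an assumption, not the file minikube actually ships):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Illustrative bridge CNI config; values are assumptions, not minikube's template.
        conf := map[string]interface{}{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {
                    "type":         "portmap",
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        data, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        // Writing to the working directory here; minikube copies the real file
        // to /etc/cni/net.d/1-k8s.conflist on the node.
        if err := os.WriteFile("1-k8s.conflist", append(data, '\n'), 0o644); err != nil {
            panic(err)
        }
    }
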
	I0127 03:02:53.837629  947047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:02:53.837736  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:53.837753  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-844432 minikube.k8s.io/updated_at=2025_01_27T03_02_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-844432 minikube.k8s.io/primary=true
	I0127 03:02:54.095986  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:54.095997  947047 ops.go:34] apiserver oom_adj: -16
	I0127 03:02:54.596996  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.096179  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:55.596097  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.096499  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:56.596146  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.096953  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:57.596165  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:58.096358  947047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:02:58.204014  947047 kubeadm.go:1113] duration metric: took 4.366321836s to wait for elevateKubeSystemPrivileges
	I0127 03:02:58.204059  947047 kubeadm.go:394] duration metric: took 5m32.057865579s to StartCluster
	I0127 03:02:58.204087  947047 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:58.204189  947047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:02:58.206460  947047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:02:58.206820  947047 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:02:58.206944  947047 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:02:58.207057  947047 addons.go:69] Setting storage-provisioner=true in profile "no-preload-844432"
	I0127 03:02:58.207080  947047 addons.go:238] Setting addon storage-provisioner=true in "no-preload-844432"
	I0127 03:02:58.207081  947047 addons.go:69] Setting default-storageclass=true in profile "no-preload-844432"
	I0127 03:02:58.207097  947047 addons.go:69] Setting metrics-server=true in profile "no-preload-844432"
	I0127 03:02:58.207099  947047 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:02:58.207105  947047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-844432"
	I0127 03:02:58.207109  947047 addons.go:238] Setting addon metrics-server=true in "no-preload-844432"
	W0127 03:02:58.207116  947047 addons.go:247] addon metrics-server should already be in state true
	W0127 03:02:58.207088  947047 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:02:58.207147  947047 addons.go:69] Setting dashboard=true in profile "no-preload-844432"
	I0127 03:02:58.207183  947047 addons.go:238] Setting addon dashboard=true in "no-preload-844432"
	W0127 03:02:58.207197  947047 addons.go:247] addon dashboard should already be in state true
	I0127 03:02:58.207236  947047 host.go:66] Checking if "no-preload-844432" exists ...
	I0127 03:02:58.207156  947047 host.go:66] Checking if "no-preload-844432" exists ...
	I0127 03:02:58.207161  947047 host.go:66] Checking if "no-preload-844432" exists ...
	I0127 03:02:58.207576  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.207628  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.207647  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.207670  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.207701  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.207709  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.207715  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.207738  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.208590  947047 out.go:177] * Verifying Kubernetes components...
	I0127 03:02:58.210012  947047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:02:58.225534  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0127 03:02:58.226280  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
	I0127 03:02:58.226302  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.226284  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0127 03:02:58.226948  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.227018  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.227031  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.227043  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0127 03:02:58.227100  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.227516  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.227635  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.227743  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.227766  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.228115  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.228140  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.228270  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.228314  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.228351  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.228515  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.228991  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.229030  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.229100  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 03:02:58.229154  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.229177  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.229584  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.230148  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.230197  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.233162  947047 addons.go:238] Setting addon default-storageclass=true in "no-preload-844432"
	W0127 03:02:58.233183  947047 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:02:58.233214  947047 host.go:66] Checking if "no-preload-844432" exists ...
	I0127 03:02:58.233565  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.233608  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.250019  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0127 03:02:58.250066  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0127 03:02:58.250498  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.250654  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.251044  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.251061  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.251185  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.251207  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.251257  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0127 03:02:58.251645  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.251698  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.251890  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.252000  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 03:02:58.252356  947047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:02:58.252403  947047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:02:58.252475  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.252492  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.252851  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.253104  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 03:02:58.254545  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 03:02:58.255420  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 03:02:58.257175  947047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:02:58.257299  947047 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:02:58.258457  947047 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:58.258475  947047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:02:58.258492  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 03:02:58.260141  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0127 03:02:58.260402  947047 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:02:58.260627  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.261411  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.261429  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.261513  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:02:58.261524  947047 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:02:58.261539  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 03:02:58.262417  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.262435  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 03:02:58.262440  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.262446  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.262840  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 03:02:58.264367  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 03:02:58.264649  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 03:02:58.264762  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 03:02:58.264912  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 03:02:58.265306  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 03:02:58.265741  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.266241  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 03:02:58.266320  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.266475  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 03:02:58.266738  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 03:02:58.266921  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 03:02:58.267065  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 03:02:58.267965  947047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:02:58.269113  947047 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:02:58.269133  947047 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:02:58.269156  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 03:02:58.272127  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.272485  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 03:02:58.272516  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.272570  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 03:02:58.272739  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 03:02:58.272938  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 03:02:58.273094  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 03:02:58.275072  947047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0127 03:02:58.275639  947047 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:02:58.276259  947047 main.go:141] libmachine: Using API Version  1
	I0127 03:02:58.276276  947047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:02:58.276580  947047 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:02:58.276763  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetState
	I0127 03:02:58.278299  947047 main.go:141] libmachine: (no-preload-844432) Calling .DriverName
	I0127 03:02:58.278669  947047 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:58.278689  947047 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:02:58.278710  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHHostname
	I0127 03:02:58.281922  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.282368  947047 main.go:141] libmachine: (no-preload-844432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:d7:60", ip: ""} in network mk-no-preload-844432: {Iface:virbr4 ExpiryTime:2025-01-27 03:57:00 +0000 UTC Type:0 Mac:52:54:00:17:d7:60 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:no-preload-844432 Clientid:01:52:54:00:17:d7:60}
	I0127 03:02:58.282397  947047 main.go:141] libmachine: (no-preload-844432) DBG | domain no-preload-844432 has defined IP address 192.168.72.144 and MAC address 52:54:00:17:d7:60 in network mk-no-preload-844432
	I0127 03:02:58.282689  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHPort
	I0127 03:02:58.282941  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHKeyPath
	I0127 03:02:58.283143  947047 main.go:141] libmachine: (no-preload-844432) Calling .GetSSHUsername
	I0127 03:02:58.283298  947047 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/no-preload-844432/id_rsa Username:docker}
	I0127 03:02:58.504253  947047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:02:58.535693  947047 node_ready.go:35] waiting up to 6m0s for node "no-preload-844432" to be "Ready" ...
	I0127 03:02:58.566557  947047 node_ready.go:49] node "no-preload-844432" has status "Ready":"True"
	I0127 03:02:58.566582  947047 node_ready.go:38] duration metric: took 30.841638ms for node "no-preload-844432" to be "Ready" ...
	I0127 03:02:58.566595  947047 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:02:58.572728  947047 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace to be "Ready" ...
	I0127 03:02:58.606720  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:02:58.606752  947047 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:02:58.623425  947047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:02:58.670985  947047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:02:58.672024  947047 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:02:58.672051  947047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:02:58.683524  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:02:58.683559  947047 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:02:58.713942  947047 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:02:58.713977  947047 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:02:58.742162  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:02:58.742195  947047 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:02:58.779936  947047 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.779974  947047 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:02:58.813771  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:02:58.813802  947047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:02:58.857266  947047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:02:58.908792  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:02:58.908828  947047 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:02:58.949460  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:02:58.949496  947047 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:02:59.031855  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:02:59.031898  947047 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:02:59.083547  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:02:59.083582  947047 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:02:59.105278  947047 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:59.105309  947047 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:02:59.123538  947047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:02:59.455917  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.455964  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.455990  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:02:59.456045  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:02:59.456433  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.456452  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.456463  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.456471  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:02:59.456523  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:02:59.456778  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:02:59.458539  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.458559  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.458567  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.458574  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:02:59.458603  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.458621  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.458827  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.458846  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:02:59.484545  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:02:59.484576  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:02:59.484965  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:02:59.484977  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:02:59.484995  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.073237  947047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.215924847s)
	I0127 03:03:00.073306  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.073322  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:03:00.073677  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.073682  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:03:00.073696  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.073706  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:00.073713  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:03:00.074064  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:03:00.074109  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:00.074134  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:00.074150  947047 addons.go:479] Verifying addon metrics-server=true in "no-preload-844432"
	I0127 03:03:00.587190  947047 pod_ready.go:103] pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:01.296114  947047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.172499261s)
	I0127 03:03:01.296214  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:01.296236  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:03:01.296590  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:03:01.296633  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:01.296648  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:01.296665  947047 main.go:141] libmachine: Making call to close driver server
	I0127 03:03:01.296674  947047 main.go:141] libmachine: (no-preload-844432) Calling .Close
	I0127 03:03:01.297014  947047 main.go:141] libmachine: (no-preload-844432) DBG | Closing plugin on server side
	I0127 03:03:01.297057  947047 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:03:01.297074  947047 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:03:01.298981  947047 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-844432 addons enable metrics-server
	
	I0127 03:03:01.300574  947047 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:03:01.301627  947047 addons.go:514] duration metric: took 3.094692811s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:03:03.080292  947047 pod_ready.go:103] pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:05.579953  947047 pod_ready.go:103] pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace has status "Ready":"False"
	I0127 03:03:06.099426  947047 pod_ready.go:93] pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.099511  947047 pod_ready.go:82] duration metric: took 7.526750309s for pod "coredns-668d6bf9bc-4272c" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.099534  947047 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s258f" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.162649  947047 pod_ready.go:93] pod "coredns-668d6bf9bc-s258f" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:06.162677  947047 pod_ready.go:82] duration metric: took 63.133376ms for pod "coredns-668d6bf9bc-s258f" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:06.162693  947047 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.168111  947047 pod_ready.go:93] pod "etcd-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:07.168145  947047 pod_ready.go:82] duration metric: took 1.005444556s for pod "etcd-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.168156  947047 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.172723  947047 pod_ready.go:93] pod "kube-apiserver-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:07.172744  947047 pod_ready.go:82] duration metric: took 4.580541ms for pod "kube-apiserver-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.172753  947047 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.180358  947047 pod_ready.go:93] pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:07.180382  947047 pod_ready.go:82] duration metric: took 7.620629ms for pod "kube-controller-manager-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.180395  947047 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mglcq" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.277046  947047 pod_ready.go:93] pod "kube-proxy-mglcq" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:07.277074  947047 pod_ready.go:82] duration metric: took 96.670181ms for pod "kube-proxy-mglcq" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.277089  947047 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.677294  947047 pod_ready.go:93] pod "kube-scheduler-no-preload-844432" in "kube-system" namespace has status "Ready":"True"
	I0127 03:03:07.677319  947047 pod_ready.go:82] duration metric: took 400.222715ms for pod "kube-scheduler-no-preload-844432" in "kube-system" namespace to be "Ready" ...
	I0127 03:03:07.677327  947047 pod_ready.go:39] duration metric: took 9.110719243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:03:07.677344  947047 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:03:07.677405  947047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:07.698411  947047 api_server.go:72] duration metric: took 9.49154502s to wait for apiserver process to appear ...
	I0127 03:03:07.698443  947047 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:03:07.698466  947047 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0127 03:03:07.715821  947047 api_server.go:279] https://192.168.72.144:8443/healthz returned 200:
	ok
	I0127 03:03:07.716790  947047 api_server.go:141] control plane version: v1.32.1
	I0127 03:03:07.716814  947047 api_server.go:131] duration metric: took 18.363368ms to wait for apiserver health ...
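The healthz step above is a plain HTTPS GET against the API server that succeeds once the endpoint answers 200 "ok". A hypothetical sketch of such a probe (the InsecureSkipVerify shortcut and 5s timeout are assumptions for brevity; the real client trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skipping TLS verification keeps the sketch short; a real check should
        // verify the API server certificate against the cluster CA instead.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.144:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" as in the log
    }
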
	I0127 03:03:07.716823  947047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:03:07.879829  947047 system_pods.go:59] 9 kube-system pods found
	I0127 03:03:07.879858  947047 system_pods.go:61] "coredns-668d6bf9bc-4272c" [45e51dbd-1a87-409f-ba67-de135d11ca95] Running
	I0127 03:03:07.879863  947047 system_pods.go:61] "coredns-668d6bf9bc-s258f" [173e92a9-28a3-4228-bca0-d3ea898cbdd9] Running
	I0127 03:03:07.879867  947047 system_pods.go:61] "etcd-no-preload-844432" [f6d28339-9bf0-4d42-ad8a-21bb74725a77] Running
	I0127 03:03:07.879870  947047 system_pods.go:61] "kube-apiserver-no-preload-844432" [b45fca11-0262-4ae9-8a2c-429c1ff48a4b] Running
	I0127 03:03:07.879873  947047 system_pods.go:61] "kube-controller-manager-no-preload-844432" [cbc4393e-748d-4a77-9170-cd92c0ed3e00] Running
	I0127 03:03:07.879877  947047 system_pods.go:61] "kube-proxy-mglcq" [4213ed0d-6c87-4e46-ae2e-84e14b30e2d6] Running
	I0127 03:03:07.879880  947047 system_pods.go:61] "kube-scheduler-no-preload-844432" [98417e34-ae05-4b9e-b330-7f684ce56bcc] Running
	I0127 03:03:07.879885  947047 system_pods.go:61] "metrics-server-f79f97bbb-ml7kw" [ef8605e4-7be9-40a5-aa31-ee050f7a0f53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:03:07.879888  947047 system_pods.go:61] "storage-provisioner" [1d82e829-9a50-43e5-adcc-73447bc7ebf4] Running
	I0127 03:03:07.879897  947047 system_pods.go:74] duration metric: took 163.066005ms to wait for pod list to return data ...
	I0127 03:03:07.879904  947047 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:03:08.077355  947047 default_sa.go:45] found service account: "default"
	I0127 03:03:08.077388  947047 default_sa.go:55] duration metric: took 197.476885ms for default service account to be created ...
	I0127 03:03:08.077398  947047 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:03:08.280532  947047 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
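The stderr log above shows minikube's readiness gating: it waits for each control-plane pod to report Ready, then polls the apiserver's /healthz endpoint until it returns 200. Below is a minimal stdlib-only Go sketch of that health poll; the URL is taken from the log, and skipping TLS verification is purely an assumption to keep the sketch self-contained (minikube itself authenticates with client certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the overall deadline passes. TLS verification is skipped here only so the
// sketch runs without cluster certificates.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.72.144:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}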
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-844432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
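The "signal: killed" above is what Go's os/exec reports when a child process is killed before it exits on its own, which is what happens when the harness's context deadline expires in the middle of the "minikube start" run. A small stdlib sketch of that pattern, with a deliberately short, hypothetical timeout:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical 2s deadline; the real integration tests use much longer ones.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// exec.CommandContext sends SIGKILL to the child when ctx expires,
	// so Run returns an error whose text ends in "signal: killed".
	cmd := exec.CommandContext(ctx, "sleep", "60")
	if err := cmd.Run(); err != nil {
		fmt.Println("command failed:", err) // e.g. "signal: killed"
	}
}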
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-844432 -n no-preload-844432
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-844432 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-844432 logs -n 25: (1.335508963s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo docker                         | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo find                           | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo crio                           | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-284111                                     | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	| delete  | -p old-k8s-version-542356                            | old-k8s-version-542356 | jenkins | v1.35.0 | 27 Jan 25 03:23 UTC | 27 Jan 25 03:23 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:17:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:17:58.007832  965412 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:17:58.008087  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008098  965412 out.go:358] Setting ErrFile to fd 2...
	I0127 03:17:58.008102  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008278  965412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:17:58.008983  965412 out.go:352] Setting JSON to false
	I0127 03:17:58.010228  965412 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14421,"bootTime":1737933457,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:17:58.010344  965412 start.go:139] virtualization: kvm guest
	I0127 03:17:58.012718  965412 out.go:177] * [bridge-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:17:58.014083  965412 notify.go:220] Checking for updates...
	I0127 03:17:58.014104  965412 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:17:58.015451  965412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:17:58.016768  965412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:17:58.017965  965412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.019014  965412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:17:58.020110  965412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:17:58.021921  965412 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022085  965412 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022217  965412 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:17:58.022360  965412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:17:58.061018  965412 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 03:17:58.062340  965412 start.go:297] selected driver: kvm2
	I0127 03:17:58.062361  965412 start.go:901] validating driver "kvm2" against <nil>
	I0127 03:17:58.062373  965412 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:17:58.063151  965412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.063269  965412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:17:58.080150  965412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:17:58.080207  965412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 03:17:58.080475  965412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:17:58.080515  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:17:58.080523  965412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 03:17:58.080596  965412 start.go:340] cluster config:
	{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:17:58.080703  965412 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.082659  965412 out.go:177] * Starting "bridge-284111" primary control-plane node in "bridge-284111" cluster
	I0127 03:17:58.084060  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:17:58.084155  965412 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:17:58.084193  965412 cache.go:56] Caching tarball of preloaded images
	I0127 03:17:58.084317  965412 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:17:58.084333  965412 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:17:58.084446  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:17:58.084473  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json: {Name:mk925500efef5bfd6040ea4d63f14dacaa6ac946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:58.084633  965412 start.go:360] acquireMachinesLock for bridge-284111: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:17:58.084676  965412 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "bridge-284111"
	I0127 03:17:58.084703  965412 start.go:93] Provisioning new machine with config: &{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:17:58.084799  965412 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 03:17:58.086526  965412 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 03:17:58.086710  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:17:58.086766  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:17:58.103582  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0127 03:17:58.104096  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:17:58.104674  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:17:58.104697  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:17:58.105051  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:17:58.105275  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:17:58.105440  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:17:58.105583  965412 start.go:159] libmachine.API.Create for "bridge-284111" (driver="kvm2")
	I0127 03:17:58.105618  965412 client.go:168] LocalClient.Create starting
	I0127 03:17:58.105657  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 03:17:58.105689  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105706  965412 main.go:141] libmachine: Parsing certificate...
	I0127 03:17:58.105761  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 03:17:58.105784  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105804  965412 main.go:141] libmachine: Parsing certificate...
	I0127 03:17:58.105828  965412 main.go:141] libmachine: Running pre-create checks...
	I0127 03:17:58.105836  965412 main.go:141] libmachine: (bridge-284111) Calling .PreCreateCheck
	I0127 03:17:58.106286  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:17:58.106758  965412 main.go:141] libmachine: Creating machine...
	I0127 03:17:58.106773  965412 main.go:141] libmachine: (bridge-284111) Calling .Create
	I0127 03:17:58.106921  965412 main.go:141] libmachine: (bridge-284111) creating KVM machine...
	I0127 03:17:58.106938  965412 main.go:141] libmachine: (bridge-284111) creating network...
	I0127 03:17:58.108340  965412 main.go:141] libmachine: (bridge-284111) DBG | found existing default KVM network
	I0127 03:17:58.109981  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.109804  965435 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:80:59} reservation:<nil>}
	I0127 03:17:58.111324  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.111241  965435 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:c5:54} reservation:<nil>}
	I0127 03:17:58.112864  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.112772  965435 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000386960}
	I0127 03:17:58.112921  965412 main.go:141] libmachine: (bridge-284111) DBG | created network xml: 
	I0127 03:17:58.112965  965412 main.go:141] libmachine: (bridge-284111) DBG | <network>
	I0127 03:17:58.112982  965412 main.go:141] libmachine: (bridge-284111) DBG |   <name>mk-bridge-284111</name>
	I0127 03:17:58.112994  965412 main.go:141] libmachine: (bridge-284111) DBG |   <dns enable='no'/>
	I0127 03:17:58.113003  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113012  965412 main.go:141] libmachine: (bridge-284111) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 03:17:58.113026  965412 main.go:141] libmachine: (bridge-284111) DBG |     <dhcp>
	I0127 03:17:58.113039  965412 main.go:141] libmachine: (bridge-284111) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 03:17:58.113049  965412 main.go:141] libmachine: (bridge-284111) DBG |     </dhcp>
	I0127 03:17:58.113065  965412 main.go:141] libmachine: (bridge-284111) DBG |   </ip>
	I0127 03:17:58.113087  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113098  965412 main.go:141] libmachine: (bridge-284111) DBG | </network>
	I0127 03:17:58.113108  965412 main.go:141] libmachine: (bridge-284111) DBG | 
	I0127 03:17:58.118866  965412 main.go:141] libmachine: (bridge-284111) DBG | trying to create private KVM network mk-bridge-284111 192.168.61.0/24...
	I0127 03:17:58.193944  965412 main.go:141] libmachine: (bridge-284111) DBG | private KVM network mk-bridge-284111 192.168.61.0/24 created
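The network XML printed above is generated by the kvm2 driver before it asks libvirt to define the private network. A sketch of producing equivalent XML with text/template from the Go standard library follows; the struct fields and template text are illustrative, not minikube's actual code, and the values are copied from the log.

package main

import (
	"os"
	"text/template"
)

// netParams holds the handful of values that vary between minikube networks.
type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

const netXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	p := netParams{
		Name:      "mk-bridge-284111",
		Gateway:   "192.168.61.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.61.2",
		DHCPEnd:   "192.168.61.253",
	}
	// Render to stdout; the driver would instead hand the XML to libvirt to define the network.
	tmpl := template.Must(template.New("net").Parse(netXML))
	_ = tmpl.Execute(os.Stdout, p)
}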
	I0127 03:17:58.194004  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.193927  965435 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.194017  965412 main.go:141] libmachine: (bridge-284111) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.194041  965412 main.go:141] libmachine: (bridge-284111) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 03:17:58.194060  965412 main.go:141] libmachine: (bridge-284111) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 03:17:58.491014  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.490850  965435 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa...
	I0127 03:17:58.742092  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.741934  965435 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk...
	I0127 03:17:58.742129  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing magic tar header
	I0127 03:17:58.742144  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing SSH key tar header
	I0127 03:17:58.742157  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.742067  965435 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.742170  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111
	I0127 03:17:58.742179  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 (perms=drwx------)
	I0127 03:17:58.742193  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 03:17:58.742211  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.742226  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 03:17:58.742240  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 03:17:58.742254  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 03:17:58.742267  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 03:17:58.742281  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins
	I0127 03:17:58.742293  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home
	I0127 03:17:58.742307  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 03:17:58.742319  965412 main.go:141] libmachine: (bridge-284111) DBG | skipping /home - not owner
	I0127 03:17:58.742332  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 03:17:58.742346  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 03:17:58.742355  965412 main.go:141] libmachine: (bridge-284111) creating domain...
	I0127 03:17:58.743737  965412 main.go:141] libmachine: (bridge-284111) define libvirt domain using xml: 
	I0127 03:17:58.743768  965412 main.go:141] libmachine: (bridge-284111) <domain type='kvm'>
	I0127 03:17:58.743795  965412 main.go:141] libmachine: (bridge-284111)   <name>bridge-284111</name>
	I0127 03:17:58.743805  965412 main.go:141] libmachine: (bridge-284111)   <memory unit='MiB'>3072</memory>
	I0127 03:17:58.743811  965412 main.go:141] libmachine: (bridge-284111)   <vcpu>2</vcpu>
	I0127 03:17:58.743818  965412 main.go:141] libmachine: (bridge-284111)   <features>
	I0127 03:17:58.743824  965412 main.go:141] libmachine: (bridge-284111)     <acpi/>
	I0127 03:17:58.743831  965412 main.go:141] libmachine: (bridge-284111)     <apic/>
	I0127 03:17:58.743836  965412 main.go:141] libmachine: (bridge-284111)     <pae/>
	I0127 03:17:58.743843  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.743860  965412 main.go:141] libmachine: (bridge-284111)   </features>
	I0127 03:17:58.743868  965412 main.go:141] libmachine: (bridge-284111)   <cpu mode='host-passthrough'>
	I0127 03:17:58.743872  965412 main.go:141] libmachine: (bridge-284111)   
	I0127 03:17:58.743877  965412 main.go:141] libmachine: (bridge-284111)   </cpu>
	I0127 03:17:58.743916  965412 main.go:141] libmachine: (bridge-284111)   <os>
	I0127 03:17:58.743943  965412 main.go:141] libmachine: (bridge-284111)     <type>hvm</type>
	I0127 03:17:58.743960  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='cdrom'/>
	I0127 03:17:58.743978  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='hd'/>
	I0127 03:17:58.743991  965412 main.go:141] libmachine: (bridge-284111)     <bootmenu enable='no'/>
	I0127 03:17:58.744000  965412 main.go:141] libmachine: (bridge-284111)   </os>
	I0127 03:17:58.744011  965412 main.go:141] libmachine: (bridge-284111)   <devices>
	I0127 03:17:58.744022  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='cdrom'>
	I0127 03:17:58.744037  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/boot2docker.iso'/>
	I0127 03:17:58.744049  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hdc' bus='scsi'/>
	I0127 03:17:58.744056  965412 main.go:141] libmachine: (bridge-284111)       <readonly/>
	I0127 03:17:58.744068  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744079  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='disk'>
	I0127 03:17:58.744092  965412 main.go:141] libmachine: (bridge-284111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 03:17:58.744106  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk'/>
	I0127 03:17:58.744119  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hda' bus='virtio'/>
	I0127 03:17:58.744129  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744147  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744166  965412 main.go:141] libmachine: (bridge-284111)       <source network='mk-bridge-284111'/>
	I0127 03:17:58.744177  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744181  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744188  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744199  965412 main.go:141] libmachine: (bridge-284111)       <source network='default'/>
	I0127 03:17:58.744209  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744220  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744237  965412 main.go:141] libmachine: (bridge-284111)     <serial type='pty'>
	I0127 03:17:58.744254  965412 main.go:141] libmachine: (bridge-284111)       <target port='0'/>
	I0127 03:17:58.744267  965412 main.go:141] libmachine: (bridge-284111)     </serial>
	I0127 03:17:58.744277  965412 main.go:141] libmachine: (bridge-284111)     <console type='pty'>
	I0127 03:17:58.744286  965412 main.go:141] libmachine: (bridge-284111)       <target type='serial' port='0'/>
	I0127 03:17:58.744295  965412 main.go:141] libmachine: (bridge-284111)     </console>
	I0127 03:17:58.744304  965412 main.go:141] libmachine: (bridge-284111)     <rng model='virtio'>
	I0127 03:17:58.744320  965412 main.go:141] libmachine: (bridge-284111)       <backend model='random'>/dev/random</backend>
	I0127 03:17:58.744330  965412 main.go:141] libmachine: (bridge-284111)     </rng>
	I0127 03:17:58.744339  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744352  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744383  965412 main.go:141] libmachine: (bridge-284111)   </devices>
	I0127 03:17:58.744399  965412 main.go:141] libmachine: (bridge-284111) </domain>
	I0127 03:17:58.744433  965412 main.go:141] libmachine: (bridge-284111) 
	I0127 03:17:58.748565  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b5:a5:4c in network default
	I0127 03:17:58.749275  965412 main.go:141] libmachine: (bridge-284111) starting domain...
	I0127 03:17:58.749295  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:17:58.749303  965412 main.go:141] libmachine: (bridge-284111) ensuring networks are active...
	I0127 03:17:58.750055  965412 main.go:141] libmachine: (bridge-284111) Ensuring network default is active
	I0127 03:17:58.750412  965412 main.go:141] libmachine: (bridge-284111) Ensuring network mk-bridge-284111 is active
	I0127 03:17:58.750915  965412 main.go:141] libmachine: (bridge-284111) getting domain XML...
	I0127 03:17:58.751662  965412 main.go:141] libmachine: (bridge-284111) creating domain...
	I0127 03:18:00.015025  965412 main.go:141] libmachine: (bridge-284111) waiting for IP...
	I0127 03:18:00.016519  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.017082  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.017146  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.017069  965435 retry.go:31] will retry after 296.46937ms: waiting for domain to come up
	I0127 03:18:00.315605  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.316275  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.316335  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.316255  965435 retry.go:31] will retry after 324.587633ms: waiting for domain to come up
	I0127 03:18:00.642896  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.643504  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.643533  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.643463  965435 retry.go:31] will retry after 310.207491ms: waiting for domain to come up
	I0127 03:18:00.955258  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.955855  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.955900  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.955817  965435 retry.go:31] will retry after 446.485588ms: waiting for domain to come up
	I0127 03:18:01.403690  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.404190  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.404213  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.404170  965435 retry.go:31] will retry after 582.778524ms: waiting for domain to come up
	I0127 03:18:01.988986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.989525  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.989575  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.989493  965435 retry.go:31] will retry after 794.193078ms: waiting for domain to come up
	I0127 03:18:02.784888  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:02.785367  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:02.785398  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:02.785331  965435 retry.go:31] will retry after 750.185481ms: waiting for domain to come up
	I0127 03:18:03.536841  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:03.537466  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:03.537489  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:03.537438  965435 retry.go:31] will retry after 1.167158008s: waiting for domain to come up
	I0127 03:18:04.706731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:04.707283  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:04.707309  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:04.707258  965435 retry.go:31] will retry after 1.775191002s: waiting for domain to come up
	I0127 03:18:06.485130  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:06.485646  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:06.485667  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:06.485615  965435 retry.go:31] will retry after 1.448139158s: waiting for domain to come up
	I0127 03:18:07.935272  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:07.935916  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:07.935951  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:07.935874  965435 retry.go:31] will retry after 1.937800559s: waiting for domain to come up
	I0127 03:18:09.876527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:09.877179  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:09.877209  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:09.877127  965435 retry.go:31] will retry after 3.510411188s: waiting for domain to come up
	I0127 03:18:13.388796  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:13.389263  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:13.389312  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:13.389227  965435 retry.go:31] will retry after 2.812768495s: waiting for domain to come up
	I0127 03:18:16.203115  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:16.203663  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:16.203687  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:16.203637  965435 retry.go:31] will retry after 5.220368337s: waiting for domain to come up
	I0127 03:18:21.428631  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429297  965412 main.go:141] libmachine: (bridge-284111) found domain IP: 192.168.61.178
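The "will retry after ..." lines above come from a jittered backoff loop that keeps asking libvirt whether the guest has picked up a DHCP lease. A generic stdlib sketch of that wait-with-backoff pattern is below; the check function and the exact backoff schedule are placeholders, not the driver's real retry code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries check with a roughly doubling, jittered delay until it
// succeeds or the overall timeout is exceeded.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Placeholder check: in the driver this would query libvirt for the domain's DHCP lease.
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, time.Minute)
}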
	I0127 03:18:21.429319  965412 main.go:141] libmachine: (bridge-284111) reserving static IP address...
	I0127 03:18:21.429334  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has current primary IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429752  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find host DHCP lease matching {name: "bridge-284111", mac: "52:54:00:b1:5c:91", ip: "192.168.61.178"} in network mk-bridge-284111
	I0127 03:18:21.509966  965412 main.go:141] libmachine: (bridge-284111) reserved static IP address 192.168.61.178 for domain bridge-284111
	I0127 03:18:21.509994  965412 main.go:141] libmachine: (bridge-284111) waiting for SSH...
	I0127 03:18:21.510014  965412 main.go:141] libmachine: (bridge-284111) DBG | Getting to WaitForSSH function...
	I0127 03:18:21.512978  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513493  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.513526  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513707  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH client type: external
	I0127 03:18:21.513738  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa (-rw-------)
	I0127 03:18:21.513787  965412 main.go:141] libmachine: (bridge-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:18:21.513808  965412 main.go:141] libmachine: (bridge-284111) DBG | About to run SSH command:
	I0127 03:18:21.513827  965412 main.go:141] libmachine: (bridge-284111) DBG | exit 0
	I0127 03:18:21.644785  965412 main.go:141] libmachine: (bridge-284111) DBG | SSH cmd err, output: <nil>: 
	I0127 03:18:21.645052  965412 main.go:141] libmachine: (bridge-284111) KVM machine creation complete
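The "Using SSH client type: external" step shells out to the system ssh binary with the option list logged above and treats a successful "exit 0" as proof the guest is reachable. A stdlib sketch of that probe, using a subset of the logged options; the address and key path are copied from the log and only work against a live VM created by this run.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs "exit 0" on the guest via the external ssh client,
// mirroring (a subset of) the options minikube logs for its SSH probe.
func sshReady(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	err := sshReady("192.168.61.178", "/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa")
	fmt.Println("reachable:", err == nil)
}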
	I0127 03:18:21.645355  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:21.645965  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646190  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646360  965412 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 03:18:21.646375  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:21.647746  965412 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 03:18:21.647759  965412 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 03:18:21.647764  965412 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 03:18:21.647770  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.650013  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650350  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.650389  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650556  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.650778  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.650971  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.651160  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.651399  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.651690  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.651705  965412 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 03:18:21.764222  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:21.764246  965412 main.go:141] libmachine: Detecting the provisioner...
	I0127 03:18:21.764254  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.767309  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767688  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.767729  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767918  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.768152  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768332  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768482  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.768638  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.768838  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.768853  965412 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 03:18:21.881643  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 03:18:21.881735  965412 main.go:141] libmachine: found compatible host: buildroot
	I0127 03:18:21.881746  965412 main.go:141] libmachine: Provisioning with buildroot...
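Provisioner detection above amounts to running "cat /etc/os-release" over SSH and matching on the ID field. A minimal Go parser for that key=value format, fed with the sample printed in the log; the trimming rules are simplified relative to the full os-release specification.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style "KEY=value" lines into a map,
// stripping optional surrounding double quotes from the values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(val, `"`)
	}
	return out
}

func main() {
	// Sample taken from the log above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}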
	I0127 03:18:21.881753  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.881975  965412 buildroot.go:166] provisioning hostname "bridge-284111"
	I0127 03:18:21.881988  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.882114  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.885113  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885480  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.885512  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885630  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.885871  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886021  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886238  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.886376  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.886540  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.886551  965412 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-284111 && echo "bridge-284111" | sudo tee /etc/hostname
	I0127 03:18:22.015776  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-284111
	
	I0127 03:18:22.015808  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.018986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019331  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.019361  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019548  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.019766  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.019970  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.020119  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.020270  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.020473  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.020500  965412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-284111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-284111/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-284111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:18:22.149637  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:22.149671  965412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:18:22.149726  965412 buildroot.go:174] setting up certificates
	I0127 03:18:22.149746  965412 provision.go:84] configureAuth start
	I0127 03:18:22.149765  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:22.150087  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.153181  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153482  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.153504  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153707  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.156418  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.156825  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.156858  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.157060  965412 provision.go:143] copyHostCerts
	I0127 03:18:22.157140  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:18:22.157153  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:18:22.157243  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:18:22.157355  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:18:22.157366  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:18:22.157404  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:18:22.157496  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:18:22.157506  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:18:22.157546  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:18:22.157616  965412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.bridge-284111 san=[127.0.0.1 192.168.61.178 bridge-284111 localhost minikube]
	I0127 03:18:22.340623  965412 provision.go:177] copyRemoteCerts
	I0127 03:18:22.340707  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:18:22.340739  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.343784  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344187  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.344219  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344432  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.344616  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.344750  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.344872  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:22.435531  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:18:22.459380  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:18:22.481955  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 03:18:22.504297  965412 provision.go:87] duration metric: took 354.53072ms to configureAuth
	I0127 03:18:22.504340  965412 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:18:22.504542  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:22.504637  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.507527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.507981  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.508014  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.508272  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.508518  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508696  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508867  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.509083  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.509321  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.509344  965412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:18:22.745255  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:18:22.745289  965412 main.go:141] libmachine: Checking connection to Docker...
	I0127 03:18:22.745298  965412 main.go:141] libmachine: (bridge-284111) Calling .GetURL
	I0127 03:18:22.746733  965412 main.go:141] libmachine: (bridge-284111) DBG | using libvirt version 6000000
	I0127 03:18:22.748816  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749210  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.749235  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749452  965412 main.go:141] libmachine: Docker is up and running!
	I0127 03:18:22.749468  965412 main.go:141] libmachine: Reticulating splines...
	I0127 03:18:22.749477  965412 client.go:171] duration metric: took 24.643847103s to LocalClient.Create
	I0127 03:18:22.749501  965412 start.go:167] duration metric: took 24.643920715s to libmachine.API.Create "bridge-284111"
	I0127 03:18:22.749510  965412 start.go:293] postStartSetup for "bridge-284111" (driver="kvm2")
	I0127 03:18:22.749521  965412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:18:22.749538  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.749766  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:18:22.749791  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.752050  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752455  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.752481  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752670  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.752875  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.753046  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.753209  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:22.838649  965412 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:18:22.842594  965412 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:18:22.842623  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:18:22.842702  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:18:22.842811  965412 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:18:22.842925  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:18:22.851615  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:22.873576  965412 start.go:296] duration metric: took 124.051614ms for postStartSetup
	I0127 03:18:22.873628  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:22.874263  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.877366  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877690  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.877717  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877984  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:18:22.878205  965412 start.go:128] duration metric: took 24.793394051s to createHost
	I0127 03:18:22.878230  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.880656  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881029  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.881057  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881273  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.881451  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881617  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881735  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.881878  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.882070  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.882081  965412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:18:22.993428  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947902.961069921
	
	I0127 03:18:22.993452  965412 fix.go:216] guest clock: 1737947902.961069921
	I0127 03:18:22.993459  965412 fix.go:229] Guest: 2025-01-27 03:18:22.961069921 +0000 UTC Remote: 2025-01-27 03:18:22.878219801 +0000 UTC m=+24.911173814 (delta=82.85012ms)
	I0127 03:18:22.993480  965412 fix.go:200] guest clock delta is within tolerance: 82.85012ms
	I0127 03:18:22.993486  965412 start.go:83] releasing machines lock for "bridge-284111", held for 24.908799324s
	I0127 03:18:22.993504  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.993771  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.996377  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996721  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.996743  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996876  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997362  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997554  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997692  965412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:18:22.997726  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.997831  965412 ssh_runner.go:195] Run: cat /version.json
	I0127 03:18:22.997879  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:23.000390  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000715  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.000748  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000765  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000835  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001133  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001212  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.001255  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.001296  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001383  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001468  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.001516  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001641  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001749  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.082154  965412 ssh_runner.go:195] Run: systemctl --version
	I0127 03:18:23.117345  965412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:18:23.273868  965412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:18:23.280724  965412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:18:23.280787  965412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:18:23.296482  965412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:18:23.296511  965412 start.go:495] detecting cgroup driver to use...
	I0127 03:18:23.296594  965412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:18:23.311864  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:18:23.326213  965412 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:18:23.326279  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:18:23.340218  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:18:23.354322  965412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:18:23.476775  965412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:18:23.639888  965412 docker.go:233] disabling docker service ...
	I0127 03:18:23.639952  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:18:23.654213  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:18:23.666393  965412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:18:23.791691  965412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:18:23.913216  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:18:23.928195  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:18:23.946645  965412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:18:23.946719  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.956606  965412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:18:23.956669  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.966456  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.975900  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.985665  965412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:18:23.996373  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.005997  965412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.022695  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.032296  965412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:18:24.041565  965412 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:18:24.041627  965412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:18:24.054330  965412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:18:24.064064  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:24.182330  965412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:18:24.274584  965412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:18:24.274671  965412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:18:24.279679  965412 start.go:563] Will wait 60s for crictl version
	I0127 03:18:24.279736  965412 ssh_runner.go:195] Run: which crictl
	I0127 03:18:24.283480  965412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:18:24.325459  965412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:18:24.325556  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.358736  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.389379  965412 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:18:24.390675  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:24.393731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394168  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:24.394201  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394421  965412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:18:24.398415  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:24.413708  965412 kubeadm.go:883] updating cluster {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:18:24.413840  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:18:24.413899  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:24.444435  965412 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:18:24.444515  965412 ssh_runner.go:195] Run: which lz4
	I0127 03:18:24.448257  965412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:18:24.451999  965412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:18:24.452038  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:18:25.746010  965412 crio.go:462] duration metric: took 1.297780518s to copy over tarball
	I0127 03:18:25.746099  965412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:18:28.004354  965412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258210919s)
	I0127 03:18:28.004393  965412 crio.go:469] duration metric: took 2.258349498s to extract the tarball
	I0127 03:18:28.004404  965412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:18:28.043277  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:28.083196  965412 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:18:28.083221  965412 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:18:28.083229  965412 kubeadm.go:934] updating node { 192.168.61.178 8443 v1.32.1 crio true true} ...
	I0127 03:18:28.083347  965412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-284111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 03:18:28.083435  965412 ssh_runner.go:195] Run: crio config
	I0127 03:18:28.136532  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:28.136559  965412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:18:28.136582  965412 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-284111 NodeName:bridge-284111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:18:28.136722  965412 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-284111"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.178"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:18:28.136785  965412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:18:28.148059  965412 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:18:28.148148  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:18:28.159212  965412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 03:18:28.177174  965412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:18:28.194607  965412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 03:18:28.212099  965412 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0127 03:18:28.216059  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:28.229417  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:28.371410  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:28.389537  965412 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111 for IP: 192.168.61.178
	I0127 03:18:28.389563  965412 certs.go:194] generating shared ca certs ...
	I0127 03:18:28.389583  965412 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.389758  965412 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:18:28.389807  965412 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:18:28.389843  965412 certs.go:256] generating profile certs ...
	I0127 03:18:28.389921  965412 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key
	I0127 03:18:28.389966  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt with IP's: []
	I0127 03:18:28.445000  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt ...
	I0127 03:18:28.445033  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt: {Name:mk9e7d9c51cfe9365fde4974dd819fc8a0bc2c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445242  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key ...
	I0127 03:18:28.445257  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key: {Name:mk894eba5407f86f4d0ac29f6591849b258437b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445372  965412 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd
	I0127 03:18:28.445393  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.178]
	I0127 03:18:28.526577  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd ...
	I0127 03:18:28.526609  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd: {Name:mk6aec7505a30c2d0a25e9e0af381fa28e034b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527301  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd ...
	I0127 03:18:28.527321  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd: {Name:mka5254c805742e5a010001442cf41b9cd6eb55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527419  965412 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt
	I0127 03:18:28.527506  965412 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key
	I0127 03:18:28.527579  965412 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key
	I0127 03:18:28.527604  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt with IP's: []
	I0127 03:18:28.748033  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt ...
	I0127 03:18:28.748067  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt: {Name:mk5216cbd26d0be2d45e0038f200d35e4ccd2e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748266  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key ...
	I0127 03:18:28.748285  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key: {Name:mk834e366bff2ac05f8e145b0ed8884b9ec0040a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748490  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:18:28.748541  965412 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:18:28.748557  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:18:28.748588  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:18:28.748617  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:18:28.748649  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:18:28.748699  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:28.749391  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:18:28.774598  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:18:28.797221  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:18:28.819775  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:18:28.844206  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 03:18:28.868818  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:18:28.893782  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:18:28.918276  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:18:28.942153  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:18:28.964770  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:18:28.987187  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:18:29.011066  965412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:18:29.027191  965412 ssh_runner.go:195] Run: openssl version
	I0127 03:18:29.033146  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:18:29.044813  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049334  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049405  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.055257  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:18:29.068772  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:18:29.083121  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087778  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087846  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.095607  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:18:29.108404  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:18:29.123881  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130048  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130122  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.135495  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:18:29.146435  965412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:18:29.150627  965412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 03:18:29.150696  965412 kubeadm.go:392] StartCluster: {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:18:29.150795  965412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:18:29.150878  965412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:18:29.193528  965412 cri.go:89] found id: ""
	I0127 03:18:29.193616  965412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:18:29.203514  965412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:18:29.213077  965412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:18:29.225040  965412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:18:29.225067  965412 kubeadm.go:157] found existing configuration files:
	
	I0127 03:18:29.225118  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:18:29.234175  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:18:29.234234  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:18:29.243247  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:18:29.252478  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:18:29.252533  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:18:29.262187  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.271490  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:18:29.271550  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.281421  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:18:29.289870  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:18:29.289944  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:18:29.298976  965412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:18:29.453263  965412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:18:39.039753  965412 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:18:39.039835  965412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:18:39.039931  965412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:18:39.040064  965412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:18:39.040201  965412 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:18:39.040292  965412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:18:39.041906  965412 out.go:235]   - Generating certificates and keys ...
	I0127 03:18:39.042004  965412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:18:39.042097  965412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:18:39.042190  965412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 03:18:39.042251  965412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 03:18:39.042319  965412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 03:18:39.042370  965412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 03:18:39.042423  965412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 03:18:39.042563  965412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042626  965412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 03:18:39.042798  965412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042911  965412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 03:18:39.043006  965412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 03:18:39.043074  965412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 03:18:39.043158  965412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:18:39.043267  965412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:18:39.043359  965412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:18:39.043439  965412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:18:39.043526  965412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:18:39.043598  965412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:18:39.043710  965412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:18:39.043807  965412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:18:39.045144  965412 out.go:235]   - Booting up control plane ...
	I0127 03:18:39.045244  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:18:39.045327  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:18:39.045407  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:18:39.045550  965412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:18:39.045646  965412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:18:39.045707  965412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:18:39.045807  965412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:18:39.045898  965412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:18:39.045994  965412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.82396ms
	I0127 03:18:39.046096  965412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:18:39.046186  965412 kubeadm.go:310] [api-check] The API server is healthy after 5.003089327s
	I0127 03:18:39.046295  965412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:18:39.046472  965412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:18:39.046560  965412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:18:39.046735  965412 kubeadm.go:310] [mark-control-plane] Marking the node bridge-284111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:18:39.046819  965412 kubeadm.go:310] [bootstrap-token] Using token: 9vz6c7.t2ey9xa65s2m5rce
	I0127 03:18:39.048225  965412 out.go:235]   - Configuring RBAC rules ...
	I0127 03:18:39.048342  965412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:18:39.048430  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:18:39.048558  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:18:39.048663  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:18:39.048758  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:18:39.048829  965412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:18:39.048972  965412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:18:39.049013  965412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:18:39.049058  965412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:18:39.049064  965412 kubeadm.go:310] 
	I0127 03:18:39.049117  965412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:18:39.049123  965412 kubeadm.go:310] 
	I0127 03:18:39.049204  965412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:18:39.049211  965412 kubeadm.go:310] 
	I0127 03:18:39.049232  965412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:18:39.049289  965412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:18:39.049374  965412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:18:39.049387  965412 kubeadm.go:310] 
	I0127 03:18:39.049462  965412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:18:39.049472  965412 kubeadm.go:310] 
	I0127 03:18:39.049547  965412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:18:39.049555  965412 kubeadm.go:310] 
	I0127 03:18:39.049628  965412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:18:39.049755  965412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:18:39.049867  965412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:18:39.049877  965412 kubeadm.go:310] 
	I0127 03:18:39.049992  965412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:18:39.050101  965412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:18:39.050111  965412 kubeadm.go:310] 
	I0127 03:18:39.050182  965412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050284  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:18:39.050318  965412 kubeadm.go:310] 	--control-plane 
	I0127 03:18:39.050325  965412 kubeadm.go:310] 
	I0127 03:18:39.050393  965412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:18:39.050399  965412 kubeadm.go:310] 
	I0127 03:18:39.050483  965412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050641  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
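	# Editor's note (illustrative, not executed in this run): the bootstrap token and join
	# command printed above are not one-shot; they can be listed or regenerated on the
	# control plane at any time with:
	$ sudo kubeadm token list
	$ sudo kubeadm token create --print-join-command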
	I0127 03:18:39.050656  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:39.052074  965412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:18:39.053180  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:18:39.065430  965412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
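	# Editor's note: the 496-byte file copied above is minikube's bridge CNI config. Its exact
	# contents are not dumped in this log; a typical bridge + portmap conflist (sketch only,
	# field values assumed) would look roughly like the output of:
	$ sudo cat /etc/cni/net.d/1-k8s.conflist
	#   {
	#     "cniVersion": "0.4.0",
	#     "name": "bridge",
	#     "plugins": [
	#       { "type": "bridge", "bridge": "bridge", "isGateway": true, "isDefaultGateway": true,
	#         "ipMasq": true,
	#         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	#       { "type": "portmap", "capabilities": { "portMappings": true } }
	#     ]
	#   }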
	I0127 03:18:39.085517  965412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:18:39.085626  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.085655  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-284111 minikube.k8s.io/updated_at=2025_01_27T03_18_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=bridge-284111 minikube.k8s.io/primary=true
	I0127 03:18:39.236877  965412 ops.go:34] apiserver oom_adj: -16
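	# Editor's note (illustrative): the -16 above is read from the legacy oom_adj file for the
	# kube-apiserver process; the lower the value, the less likely the OOM killer is to pick it.
	# Both the legacy and current knobs can be inspected on the node with:
	$ cat /proc/$(pgrep kube-apiserver)/oom_adj
	$ cat /proc/$(pgrep kube-apiserver)/oom_score_adj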
	I0127 03:18:39.239687  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.739742  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.240439  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.740627  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.240543  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.740802  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.239814  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.740769  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.239766  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.362731  965412 kubeadm.go:1113] duration metric: took 4.27717357s to wait for elevateKubeSystemPrivileges
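	# Editor's note (illustrative): the repeated "get sa default" calls above poll until the
	# default ServiceAccount exists, at which point the minikube-rbac cluster-admin binding
	# created for kube-system:default is usable. Equivalent manual checks:
	$ kubectl -n default get serviceaccount default
	$ kubectl get clusterrolebinding minikube-rbac -o wide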
	I0127 03:18:43.362780  965412 kubeadm.go:394] duration metric: took 14.212089282s to StartCluster
	I0127 03:18:43.362819  965412 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.362902  965412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:18:43.364337  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.364571  965412 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:18:43.364601  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 03:18:43.364623  965412 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
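	# Editor's note (illustrative): only storage-provisioner and default-storageclass are
	# toggled on in the map above; the per-profile addon state can be listed with:
	$ minikube -p bridge-284111 addons list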
	I0127 03:18:43.364821  965412 addons.go:69] Setting storage-provisioner=true in profile "bridge-284111"
	I0127 03:18:43.364832  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:43.364844  965412 addons.go:238] Setting addon storage-provisioner=true in "bridge-284111"
	I0127 03:18:43.364884  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.364893  965412 addons.go:69] Setting default-storageclass=true in profile "bridge-284111"
	I0127 03:18:43.364911  965412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-284111"
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365478  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365586  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.366316  965412 out.go:177] * Verifying Kubernetes components...
	I0127 03:18:43.367578  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:43.382144  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0127 03:18:43.382166  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0127 03:18:43.382709  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.382710  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.383321  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383343  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383326  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383448  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.384068  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.384497  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.384547  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.388396  965412 addons.go:238] Setting addon default-storageclass=true in "bridge-284111"
	I0127 03:18:43.388448  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.388836  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.388888  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.401487  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0127 03:18:43.401963  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.402532  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.402555  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.402948  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.403176  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.405227  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.406011  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0127 03:18:43.406386  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.406864  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.406895  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.407221  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.407649  965412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:18:43.407895  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.407952  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.409292  965412 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.409316  965412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:18:43.409339  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.413101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.413591  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.413629  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.414006  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.414216  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.414393  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.414580  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:43.427369  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0127 03:18:43.427939  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.429588  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.429624  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.430052  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.430287  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.432335  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.432595  965412 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:43.432622  965412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:18:43.432642  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.436101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436528  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.436573  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436690  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.436907  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.437126  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.437286  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:43.623874  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:43.623927  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
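	# Editor's note: the sed pipeline above injects a hosts block ahead of the forward plugin so
	# that host.minikube.internal resolves to the host IP. The resulting Corefile fragment
	# (reconstructed from the sed expression itself, not dumped by the test) is:
	#        hosts {
	#           192.168.61.1 host.minikube.internal
	#           fallthrough
	#        }
	# It can be confirmed after the replace with:
	$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'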
	I0127 03:18:43.650661  965412 node_ready.go:35] waiting up to 15m0s for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667546  965412 node_ready.go:49] node "bridge-284111" has status "Ready":"True"
	I0127 03:18:43.667583  965412 node_ready.go:38] duration metric: took 16.886127ms for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667599  965412 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:43.687207  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:43.743454  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.814389  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:44.280907  965412 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 03:18:44.793593  965412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-284111" context rescaled to 1 replicas
	I0127 03:18:44.833718  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.09022136s)
	I0127 03:18:44.833772  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833809  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.833861  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019432049s)
	I0127 03:18:44.833920  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833938  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834133  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834152  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834178  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834186  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834409  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834427  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834450  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834446  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834458  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834464  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834668  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834701  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.848046  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.848123  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.849692  965412 main.go:141] libmachine: (bridge-284111) DBG | Closing plugin on server side
	I0127 03:18:44.849714  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.849724  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.852448  965412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 03:18:44.853648  965412 addons.go:514] duration metric: took 1.489024932s for enable addons: enabled=[storage-provisioner default-storageclass]
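	# Editor's note (illustrative): a quick check that both enabled addons took effect; the
	# default StorageClass created by the addon is conventionally named "standard" (assumed
	# here, not shown in this log):
	$ kubectl -n kube-system get pod storage-provisioner
	$ kubectl get storageclass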
	I0127 03:18:45.694816  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:46.193044  965412 pod_ready.go:93] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:46.193071  965412 pod_ready.go:82] duration metric: took 2.505825793s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:46.193081  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:48.199298  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:50.699488  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:53.198865  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:55.199017  965412 pod_ready.go:98] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.178 HostIPs:[{IP:192.168.61
.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00250df60}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 03:18:55.199049  965412 pod_ready.go:82] duration metric: took 9.005962015s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	E0127 03:18:55.199068  965412 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.178 HostIPs:[{IP:192.168.61.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc00250df60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
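	# Editor's note: coredns-668d6bf9bc-tngtp reports phase "Succeeded" because the coredns
	# deployment was rescaled to 1 replica at 03:18:44 (above) and this second replica was
	# terminated; the readiness waiter therefore skips it rather than failing. Illustrative check:
	$ kubectl -n kube-system get pods -l k8s-app=kube-dns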
	I0127 03:18:55.199080  965412 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203029  965412 pod_ready.go:93] pod "etcd-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.203055  965412 pod_ready.go:82] duration metric: took 3.966832ms for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203069  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208264  965412 pod_ready.go:93] pod "kube-apiserver-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.208286  965412 pod_ready.go:82] duration metric: took 5.209412ms for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208296  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215716  965412 pod_ready.go:93] pod "kube-controller-manager-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.215737  965412 pod_ready.go:82] duration metric: took 7.434091ms for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215747  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220146  965412 pod_ready.go:93] pod "kube-proxy-hrrdg" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.220172  965412 pod_ready.go:82] duration metric: took 4.416975ms for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220184  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601116  965412 pod_ready.go:93] pod "kube-scheduler-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.601153  965412 pod_ready.go:82] duration metric: took 380.959358ms for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601167  965412 pod_ready.go:39] duration metric: took 11.933546372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:55.601190  965412 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:18:55.601249  965412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:18:55.615311  965412 api_server.go:72] duration metric: took 12.250702622s to wait for apiserver process to appear ...
	I0127 03:18:55.615353  965412 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:18:55.615381  965412 api_server.go:253] Checking apiserver healthz at https://192.168.61.178:8443/healthz ...
	I0127 03:18:55.620633  965412 api_server.go:279] https://192.168.61.178:8443/healthz returned 200:
	ok
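	# Editor's note (illustrative): the same health probe can be issued by hand; /healthz is
	# served by the apiserver and is typically readable without a client certificate under the
	# default system:public-info-viewer binding:
	$ curl -k https://192.168.61.178:8443/healthz
	ok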
	I0127 03:18:55.621585  965412 api_server.go:141] control plane version: v1.32.1
	I0127 03:18:55.621610  965412 api_server.go:131] duration metric: took 6.249694ms to wait for apiserver health ...
	I0127 03:18:55.621618  965412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:18:55.799117  965412 system_pods.go:59] 7 kube-system pods found
	I0127 03:18:55.799150  965412 system_pods.go:61] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:55.799155  965412 system_pods.go:61] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:55.799159  965412 system_pods.go:61] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:55.799163  965412 system_pods.go:61] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:55.799166  965412 system_pods.go:61] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:55.799170  965412 system_pods.go:61] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:55.799173  965412 system_pods.go:61] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:55.799180  965412 system_pods.go:74] duration metric: took 177.555316ms to wait for pod list to return data ...
	I0127 03:18:55.799187  965412 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:18:55.996306  965412 default_sa.go:45] found service account: "default"
	I0127 03:18:55.996333  965412 default_sa.go:55] duration metric: took 197.140724ms for default service account to be created ...
	I0127 03:18:55.996343  965412 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:18:56.198691  965412 system_pods.go:87] 7 kube-system pods found
	I0127 03:18:56.397259  965412 system_pods.go:105] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:56.397285  965412 system_pods.go:105] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:56.397291  965412 system_pods.go:105] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:56.397296  965412 system_pods.go:105] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:56.397302  965412 system_pods.go:105] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:56.397306  965412 system_pods.go:105] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:56.397310  965412 system_pods.go:105] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:56.397318  965412 system_pods.go:147] duration metric: took 400.968435ms to wait for k8s-apps to be running ...
	I0127 03:18:56.397325  965412 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 03:18:56.397373  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:18:56.413149  965412 system_svc.go:56] duration metric: took 15.80669ms WaitForService to wait for kubelet
	I0127 03:18:56.413188  965412 kubeadm.go:582] duration metric: took 13.048583267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:18:56.413230  965412 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:18:56.596472  965412 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:18:56.596506  965412 node_conditions.go:123] node cpu capacity is 2
	I0127 03:18:56.596519  965412 node_conditions.go:105] duration metric: took 183.283498ms to run NodePressure ...
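	# Editor's note (illustrative): the ephemeral-storage and CPU capacity figures above come
	# from the node's reported status; the same values can be read with:
	$ kubectl get node bridge-284111 -o jsonpath='{.status.capacity}'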
	I0127 03:18:56.596532  965412 start.go:241] waiting for startup goroutines ...
	I0127 03:18:56.596538  965412 start.go:246] waiting for cluster config update ...
	I0127 03:18:56.596548  965412 start.go:255] writing updated cluster config ...
	I0127 03:18:56.596809  965412 ssh_runner.go:195] Run: rm -f paused
	I0127 03:18:56.647143  965412 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:18:56.649661  965412 out.go:177] * Done! kubectl is now configured to use "bridge-284111" cluster and "default" namespace by default
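	# Editor's note (illustrative): at this point the kubeconfig's current context points at the
	# new profile; a minimal smoke test would be:
	$ kubectl config current-context
	bridge-284111
	$ kubectl get nodes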
	
	
	==> CRI-O <==
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.393224105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948230393200963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d97fbc05-72cd-4b3a-9e91-3d0336fa3db2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.393989467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d400bf10-f813-4b51-bab4-87178bf7b5a1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.394040577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d400bf10-f813-4b51-bab4-87178bf7b5a1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.397898606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095,PodSandboxId:6d95b507ad75fd04f36dd28075e2fe87ab7c71ea46cf04feca60ff417d60c6b3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737947938167580854,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-lqzvl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f650dcd2e6a986e77e8e4329d18524fa2c285bae6a3957023cacf1918de89677,PodSandboxId:dbe7c3f11ff70d02f67456c817426e32414a47e45c9aacaba93ad77a2b772173,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737946997755822010,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wckbj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: bff02018-1a04-4250-91ea-ebde97b9dcc4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c97dacee000703d630d11751e65869f8307f0baea397fdad1317f634c0d82f6,PodSandboxId:e4e31383400ab8692affccaaf6688dbb6973f3f42b7aa756e2ae3a7ec0ceb860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980590270114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4272c,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e51dbd-1a87-409f-ba67-de135d11ca95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3d822f456262592b4de8c6d5d407e701d6d4205662e128df1874635a137def,PodSandboxId:8a7845848dd5a5efd8e54f6f7706f4c466e3f2812c69ca1c7265568a20251339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980297466094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-s258f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173e92a9-28a3-4228-bca0-d3ea898cbdd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978d270bf91af3e6ed5c59f85253595cafd862823a852f61e03eab2e0bfc8d54,PodSandboxId:550eedd2696058cdde16d2abb8edeeae94323d2d6b4b09ab6320b710e9b71271,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737946980179787944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d82e829-9a50-43e5-adcc-73447bc7ebf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7b15e74328d9ac8182ef2480b8152bc193199d6c97097b2448839fc800c4ae,PodSandboxId:a8cbe59283104a4e48c706bad76cf504e72c6d078199f6a323b85326ddaa2973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946978962085758,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mglcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4213ed0d-6c87-4e46-ae2e-84e14b30e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8104be558f9b5b8c84c385468b7c460f59c12108c21bf491c831ffb3d2c3bec4,PodSandboxId:eda0d65f4547411f891717d4d7eb4e3f8b847ce41e592daaf62ae871c205398f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946967608936653,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8b0fc321a9c55651078cb69c212265,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37cb561902636e6921e2cde02a347a42a5aa57f65ee56cc4f688e8587cc0cdb9,PodSandboxId:e61998a7443a1bf95a274a6762529d9cdf88ce6f4ae7021b26f5ce77b98271bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed764
2d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946967639870690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea6a49efd908d6743d0c8fc90dda07c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2584e1b50365dd078d77bf74bf44876b222ac322ad88bf702858aaf462dcec,PodSandboxId:37896a4abfeeac8abaf61dbf141dec87f3e1735f7e45b34e6c4a0a847d137b17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946967577536835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c71c45341680005910d4e0369df441,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b189f38bab3b025fc90d9c3b8095dd1d6dff9e3e94b698a43835e9625785ab97,PodSandboxId:8aff4afc635ce153c6edfc2f2fb8f9b62567e750e3c6fc8f0b7f8dbf484cd14b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946967480636197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd371e697c2a817a6f5bbe47507779bcb0efff1f8c0a806fb0ed27b7e0aa3b,PodSandboxId:5c26568a9dbab7e02faec968f2d90919b5dc824e3ee1f052c97a68cbd086fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946669798367477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d400bf10-f813-4b51-bab4-87178bf7b5a1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.436989439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=823f47bf-6353-45d9-9c9a-7f271264b6d2 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.437117231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=823f47bf-6353-45d9-9c9a-7f271264b6d2 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.437897156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1acc39df-c3f2-4874-8306-d4a4f3be67d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.438362081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948230438337701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1acc39df-c3f2-4874-8306-d4a4f3be67d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.438893269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ba3733c-d676-4871-8b7e-3e82cb9fd350 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.438942332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ba3733c-d676-4871-8b7e-3e82cb9fd350 name=/runtime.v1.RuntimeService/ListContainers
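	# Editor's note (illustrative): the Version/ImageFsInfo/ListContainers requests logged here
	# are periodic CRI polling of CRI-O (primarily by the kubelet); the same data can be pulled
	# on the node with crictl, e.g.:
	$ sudo crictl version
	$ sudo crictl imagefsinfo
	$ sudo crictl ps -a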
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.439267312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095,PodSandboxId:6d95b507ad75fd04f36dd28075e2fe87ab7c71ea46cf04feca60ff417d60c6b3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737947938167580854,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-lqzvl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f650dcd2e6a986e77e8e4329d18524fa2c285bae6a3957023cacf1918de89677,PodSandboxId:dbe7c3f11ff70d02f67456c817426e32414a47e45c9aacaba93ad77a2b772173,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737946997755822010,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wckbj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: bff02018-1a04-4250-91ea-ebde97b9dcc4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c97dacee000703d630d11751e65869f8307f0baea397fdad1317f634c0d82f6,PodSandboxId:e4e31383400ab8692affccaaf6688dbb6973f3f42b7aa756e2ae3a7ec0ceb860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980590270114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4272c,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e51dbd-1a87-409f-ba67-de135d11ca95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3d822f456262592b4de8c6d5d407e701d6d4205662e128df1874635a137def,PodSandboxId:8a7845848dd5a5efd8e54f6f7706f4c466e3f2812c69ca1c7265568a20251339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980297466094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-s258f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173e92a9-28a3-4228-bca0-d3ea898cbdd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978d270bf91af3e6ed5c59f85253595cafd862823a852f61e03eab2e0bfc8d54,PodSandboxId:550eedd2696058cdde16d2abb8edeeae94323d2d6b4b09ab6320b710e9b71271,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737946980179787944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d82e829-9a50-43e5-adcc-73447bc7ebf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7b15e74328d9ac8182ef2480b8152bc193199d6c97097b2448839fc800c4ae,PodSandboxId:a8cbe59283104a4e48c706bad76cf504e72c6d078199f6a323b85326ddaa2973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946978962085758,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mglcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4213ed0d-6c87-4e46-ae2e-84e14b30e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8104be558f9b5b8c84c385468b7c460f59c12108c21bf491c831ffb3d2c3bec4,PodSandboxId:eda0d65f4547411f891717d4d7eb4e3f8b847ce41e592daaf62ae871c205398f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946967608936653,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8b0fc321a9c55651078cb69c212265,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37cb561902636e6921e2cde02a347a42a5aa57f65ee56cc4f688e8587cc0cdb9,PodSandboxId:e61998a7443a1bf95a274a6762529d9cdf88ce6f4ae7021b26f5ce77b98271bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed764
2d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946967639870690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea6a49efd908d6743d0c8fc90dda07c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2584e1b50365dd078d77bf74bf44876b222ac322ad88bf702858aaf462dcec,PodSandboxId:37896a4abfeeac8abaf61dbf141dec87f3e1735f7e45b34e6c4a0a847d137b17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946967577536835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c71c45341680005910d4e0369df441,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b189f38bab3b025fc90d9c3b8095dd1d6dff9e3e94b698a43835e9625785ab97,PodSandboxId:8aff4afc635ce153c6edfc2f2fb8f9b62567e750e3c6fc8f0b7f8dbf484cd14b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946967480636197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd371e697c2a817a6f5bbe47507779bcb0efff1f8c0a806fb0ed27b7e0aa3b,PodSandboxId:5c26568a9dbab7e02faec968f2d90919b5dc824e3ee1f052c97a68cbd086fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946669798367477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ba3733c-d676-4871-8b7e-3e82cb9fd350 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.472983715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61c0fa38-ba88-407d-abbd-3381d7100038 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.473076286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61c0fa38-ba88-407d-abbd-3381d7100038 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.474370781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3de8ed5e-b460-4346-befb-8f7dde084e74 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.474807534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948230474784358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3de8ed5e-b460-4346-befb-8f7dde084e74 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.475483814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c354b5f3-9b8f-4e85-be09-2dad3d0f102f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.475546337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c354b5f3-9b8f-4e85-be09-2dad3d0f102f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.475833011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095,PodSandboxId:6d95b507ad75fd04f36dd28075e2fe87ab7c71ea46cf04feca60ff417d60c6b3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737947938167580854,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-lqzvl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f650dcd2e6a986e77e8e4329d18524fa2c285bae6a3957023cacf1918de89677,PodSandboxId:dbe7c3f11ff70d02f67456c817426e32414a47e45c9aacaba93ad77a2b772173,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737946997755822010,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wckbj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: bff02018-1a04-4250-91ea-ebde97b9dcc4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c97dacee000703d630d11751e65869f8307f0baea397fdad1317f634c0d82f6,PodSandboxId:e4e31383400ab8692affccaaf6688dbb6973f3f42b7aa756e2ae3a7ec0ceb860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980590270114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4272c,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e51dbd-1a87-409f-ba67-de135d11ca95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3d822f456262592b4de8c6d5d407e701d6d4205662e128df1874635a137def,PodSandboxId:8a7845848dd5a5efd8e54f6f7706f4c466e3f2812c69ca1c7265568a20251339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980297466094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-s258f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173e92a9-28a3-4228-bca0-d3ea898cbdd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978d270bf91af3e6ed5c59f85253595cafd862823a852f61e03eab2e0bfc8d54,PodSandboxId:550eedd2696058cdde16d2abb8edeeae94323d2d6b4b09ab6320b710e9b71271,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737946980179787944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d82e829-9a50-43e5-adcc-73447bc7ebf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7b15e74328d9ac8182ef2480b8152bc193199d6c97097b2448839fc800c4ae,PodSandboxId:a8cbe59283104a4e48c706bad76cf504e72c6d078199f6a323b85326ddaa2973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946978962085758,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mglcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4213ed0d-6c87-4e46-ae2e-84e14b30e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8104be558f9b5b8c84c385468b7c460f59c12108c21bf491c831ffb3d2c3bec4,PodSandboxId:eda0d65f4547411f891717d4d7eb4e3f8b847ce41e592daaf62ae871c205398f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946967608936653,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8b0fc321a9c55651078cb69c212265,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37cb561902636e6921e2cde02a347a42a5aa57f65ee56cc4f688e8587cc0cdb9,PodSandboxId:e61998a7443a1bf95a274a6762529d9cdf88ce6f4ae7021b26f5ce77b98271bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed764
2d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946967639870690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea6a49efd908d6743d0c8fc90dda07c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2584e1b50365dd078d77bf74bf44876b222ac322ad88bf702858aaf462dcec,PodSandboxId:37896a4abfeeac8abaf61dbf141dec87f3e1735f7e45b34e6c4a0a847d137b17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946967577536835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c71c45341680005910d4e0369df441,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b189f38bab3b025fc90d9c3b8095dd1d6dff9e3e94b698a43835e9625785ab97,PodSandboxId:8aff4afc635ce153c6edfc2f2fb8f9b62567e750e3c6fc8f0b7f8dbf484cd14b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946967480636197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd371e697c2a817a6f5bbe47507779bcb0efff1f8c0a806fb0ed27b7e0aa3b,PodSandboxId:5c26568a9dbab7e02faec968f2d90919b5dc824e3ee1f052c97a68cbd086fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946669798367477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c354b5f3-9b8f-4e85-be09-2dad3d0f102f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.513566742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac9ec75a-95a8-433f-86aa-f6a5c6d51cc8 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.513639245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac9ec75a-95a8-433f-86aa-f6a5c6d51cc8 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.514850461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1c93e44-44d3-4c40-a7bb-2ace5e59ba1c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.515249926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948230515229196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1c93e44-44d3-4c40-a7bb-2ace5e59ba1c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.515799256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb2a7481-250d-4ad3-bef5-87a786fabaed name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.515849179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb2a7481-250d-4ad3-bef5-87a786fabaed name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:50 no-preload-844432 crio[722]: time="2025-01-27 03:23:50.516060996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095,PodSandboxId:6d95b507ad75fd04f36dd28075e2fe87ab7c71ea46cf04feca60ff417d60c6b3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737947938167580854,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-lqzvl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f650dcd2e6a986e77e8e4329d18524fa2c285bae6a3957023cacf1918de89677,PodSandboxId:dbe7c3f11ff70d02f67456c817426e32414a47e45c9aacaba93ad77a2b772173,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737946997755822010,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wckbj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: bff02018-1a04-4250-91ea-ebde97b9dcc4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c97dacee000703d630d11751e65869f8307f0baea397fdad1317f634c0d82f6,PodSandboxId:e4e31383400ab8692affccaaf6688dbb6973f3f42b7aa756e2ae3a7ec0ceb860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980590270114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4272c,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e51dbd-1a87-409f-ba67-de135d11ca95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3d822f456262592b4de8c6d5d407e701d6d4205662e128df1874635a137def,PodSandboxId:8a7845848dd5a5efd8e54f6f7706f4c466e3f2812c69ca1c7265568a20251339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737946980297466094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-s258f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173e92a9-28a3-4228-bca0-d3ea898cbdd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978d270bf91af3e6ed5c59f85253595cafd862823a852f61e03eab2e0bfc8d54,PodSandboxId:550eedd2696058cdde16d2abb8edeeae94323d2d6b4b09ab6320b710e9b71271,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737946980179787944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d82e829-9a50-43e5-adcc-73447bc7ebf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7b15e74328d9ac8182ef2480b8152bc193199d6c97097b2448839fc800c4ae,PodSandboxId:a8cbe59283104a4e48c706bad76cf504e72c6d078199f6a323b85326ddaa2973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737946978962085758,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mglcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4213ed0d-6c87-4e46-ae2e-84e14b30e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8104be558f9b5b8c84c385468b7c460f59c12108c21bf491c831ffb3d2c3bec4,PodSandboxId:eda0d65f4547411f891717d4d7eb4e3f8b847ce41e592daaf62ae871c205398f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737946967608936653,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8b0fc321a9c55651078cb69c212265,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37cb561902636e6921e2cde02a347a42a5aa57f65ee56cc4f688e8587cc0cdb9,PodSandboxId:e61998a7443a1bf95a274a6762529d9cdf88ce6f4ae7021b26f5ce77b98271bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed764
2d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737946967639870690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea6a49efd908d6743d0c8fc90dda07c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2584e1b50365dd078d77bf74bf44876b222ac322ad88bf702858aaf462dcec,PodSandboxId:37896a4abfeeac8abaf61dbf141dec87f3e1735f7e45b34e6c4a0a847d137b17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737946967577536835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c71c45341680005910d4e0369df441,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b189f38bab3b025fc90d9c3b8095dd1d6dff9e3e94b698a43835e9625785ab97,PodSandboxId:8aff4afc635ce153c6edfc2f2fb8f9b62567e750e3c6fc8f0b7f8dbf484cd14b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737946967480636197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd371e697c2a817a6f5bbe47507779bcb0efff1f8c0a806fb0ed27b7e0aa3b,PodSandboxId:5c26568a9dbab7e02faec968f2d90919b5dc824e3ee1f052c97a68cbd086fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737946669798367477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-844432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c43c0a30cf9e4f62f0888a5ecf64ae,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb2a7481-250d-4ad3-bef5-87a786fabaed name=/runtime.v1.RuntimeService/ListContainers
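
The Version, ImageFsInfo and ListContainers requests above are routine CRI polling against the cri-o socket; the empty filter is why each poll is preceded by the "No filters were applied" debug line and returns the full container list. A minimal way to issue the same queries by hand, assuming the no-preload-844432 profile from this run is still up and that crictl is present inside the node (both assumptions, not part of the captured log):
	minikube -p no-preload-844432 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	minikube -p no-preload-844432 ssh -- sudo crictl imagefsinfo
	minikube -p no-preload-844432 ssh -- sudo crictl ps -a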
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ea0ac346951fa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   6d95b507ad75f       dashboard-metrics-scraper-86c6bf9756-lqzvl
	f650dcd2e6a98       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   dbe7c3f11ff70       kubernetes-dashboard-7779f9b69b-wckbj
	4c97dacee0007       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   e4e31383400ab       coredns-668d6bf9bc-4272c
	6c3d822f45626       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   8a7845848dd5a       coredns-668d6bf9bc-s258f
	978d270bf91af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 minutes ago      Running             storage-provisioner         0                   550eedd269605       storage-provisioner
	8f7b15e74328d       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           20 minutes ago      Running             kube-proxy                  0                   a8cbe59283104       kube-proxy-mglcq
	37cb561902636       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     3                   e61998a7443a1       kube-controller-manager-no-preload-844432
	8104be558f9b5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   eda0d65f45474       etcd-no-preload-844432
	8e2584e1b5036       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   37896a4abfeea       kube-scheduler-no-preload-844432
	b189f38bab3b0       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              3                   8aff4afc635ce       kube-apiserver-no-preload-844432
	b5dd371e697c2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              2                   5c26568a9dbab       kube-apiserver-no-preload-844432
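
Everything in the table is Running except dashboard-metrics-scraper, which has exited again after 8 restarts. One way to pull its last output and exit details, assuming the exited container has not yet been garbage-collected by cri-o (an assumption):
	minikube -p no-preload-844432 ssh -- sudo crictl logs --tail 50 ea0ac346951fa
	minikube -p no-preload-844432 ssh -- sudo crictl inspect ea0ac346951fa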
	
	
	==> coredns [4c97dacee000703d630d11751e65869f8307f0baea397fdad1317f634c0d82f6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [6c3d822f456262592b4de8c6d5d407e701d6d4205662e128df1874635a137def] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
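
Both CoreDNS replicas load the same configuration hash and report no errors, so a quick in-cluster lookup is one way to rule DNS out. A sketch, assuming the kubectl context carries the profile name no-preload-844432 and using a throwaway dns-probe pod (both assumptions):
	kubectl --context no-preload-844432 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context no-preload-844432 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default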
	
	
	==> describe nodes <==
	Name:               no-preload-844432
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-844432
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=no-preload-844432
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_02_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-844432
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:19:01 +0000   Mon, 27 Jan 2025 03:02:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:19:01 +0000   Mon, 27 Jan 2025 03:02:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:19:01 +0000   Mon, 27 Jan 2025 03:02:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:19:01 +0000   Mon, 27 Jan 2025 03:02:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.144
	  Hostname:    no-preload-844432
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc911dc9346b4206a05beb5f741072e1
	  System UUID:                cc911dc9-346b-4206-a05b-eb5f741072e1
	  Boot ID:                    e7e05fdd-8b5c-4aad-a924-ef563d3b2608
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-4272c                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-s258f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-844432                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-no-preload-844432              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-no-preload-844432     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-mglcq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-844432              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-ml7kw                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-lqzvl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wckbj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x3 over 21m)  kubelet          Node no-preload-844432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node no-preload-844432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node no-preload-844432 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node no-preload-844432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node no-preload-844432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node no-preload-844432 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-844432 event: Registered Node no-preload-844432 in Controller
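
The node report lists metrics-server-f79f97bbb-ml7kw among the non-terminated pods, while the kube-apiserver log further below shows repeated 503s for v1beta1.metrics.k8s.io. A small sketch for re-checking both sides, under the same context-name assumption as above:
	kubectl --context no-preload-844432 describe node no-preload-844432
	kubectl --context no-preload-844432 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-844432 -n kube-system get pods -l k8s-app=metrics-server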
	
	
	==> dmesg <==
	[  +4.862986] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.976028] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jan27 02:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.480040] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.062626] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062370] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.190153] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.112888] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.268692] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[ +15.325160] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.061971] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.772819] systemd-fstab-generator[1441]: Ignoring "noauto" option for root device
	[  +4.088515] kauditd_printk_skb: 87 callbacks suppressed
	[Jan27 02:58] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.236461] kauditd_printk_skb: 25 callbacks suppressed
	[Jan27 03:02] systemd-fstab-generator[3331]: Ignoring "noauto" option for root device
	[  +0.078601] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.518176] systemd-fstab-generator[3669]: Ignoring "noauto" option for root device
	[  +0.105137] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.357839] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.162791] kauditd_printk_skb: 12 callbacks suppressed
	[Jan27 03:03] kauditd_printk_skb: 112 callbacks suppressed
	[ +11.617666] kauditd_printk_skb: 5 callbacks suppressed
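
The kernel ring buffer can be re-read from the host at any time; a one-liner under the same profile assumption:
	minikube -p no-preload-844432 ssh -- sudo dmesg | tail -n 30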
	
	
	==> etcd [8104be558f9b5b8c84c385468b7c460f59c12108c21bf491c831ffb3d2c3bec4] <==
	{"level":"info","ts":"2025-01-27T03:12:49.090048Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":840,"took":"30.326895ms","hash":925267279,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2981888,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T03:12:49.090123Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":925267279,"revision":840,"compact-revision":-1}
	{"level":"warn","ts":"2025-01-27T03:13:34.274562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.955401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:13:34.275008Z","caller":"traceutil/trace.go:171","msg":"trace[356970431] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1134; }","duration":"178.412808ms","start":"2025-01-27T03:13:34.096548Z","end":"2025-01-27T03:13:34.274960Z","steps":["trace[356970431] 'range keys from in-memory index tree'  (duration: 177.893906ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:13:34.274541Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.353794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:13:34.275236Z","caller":"traceutil/trace.go:171","msg":"trace[87274084] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1134; }","duration":"247.170288ms","start":"2025-01-27T03:13:34.028052Z","end":"2025-01-27T03:13:34.275222Z","steps":["trace[87274084] 'range keys from in-memory index tree'  (duration: 246.265069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:13:37.679211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.049922ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328446246179321787 > lease_revoke:<id:54b994a5b630372c>","response":"size:29"}
	{"level":"info","ts":"2025-01-27T03:15:16.658549Z","caller":"traceutil/trace.go:171","msg":"trace[491579303] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"100.252577ms","start":"2025-01-27T03:15:16.558118Z","end":"2025-01-27T03:15:16.658371Z","steps":["trace[491579303] 'process raft request'  (duration: 99.799555ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:15:17.819493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.177348ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328446246179322792 > lease_revoke:<id:54b994a5b6303b13>","response":"size:29"}
	{"level":"info","ts":"2025-01-27T03:17:49.067989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2025-01-27T03:17:49.074853Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1092,"took":"5.800377ms","hash":3344204014,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:17:49.074978Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3344204014,"revision":1092,"compact-revision":840}
	{"level":"info","ts":"2025-01-27T03:18:29.778199Z","caller":"traceutil/trace.go:171","msg":"trace[1302601977] linearizableReadLoop","detail":"{readStateIndex:1586; appliedIndex:1585; }","duration":"366.668083ms","start":"2025-01-27T03:18:29.411476Z","end":"2025-01-27T03:18:29.778144Z","steps":["trace[1302601977] 'read index received'  (duration: 366.467861ms)","trace[1302601977] 'applied index is now lower than readState.Index'  (duration: 199.061µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T03:18:29.778376Z","caller":"traceutil/trace.go:171","msg":"trace[1597731141] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"431.745492ms","start":"2025-01-27T03:18:29.346618Z","end":"2025-01-27T03:18:29.778363Z","steps":["trace[1597731141] 'process raft request'  (duration: 431.367377ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:18:29.778644Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:18:29.346599Z","time spent":"431.807528ms","remote":"127.0.0.1:54828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-rocr4kpjdznexqns5t327qzjsu\" mod_revision:1371 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-rocr4kpjdznexqns5t327qzjsu\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-rocr4kpjdznexqns5t327qzjsu\" > >"}
	{"level":"warn","ts":"2025-01-27T03:18:29.779005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.514056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:18:29.780666Z","caller":"traceutil/trace.go:171","msg":"trace[914750142] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1379; }","duration":"369.195571ms","start":"2025-01-27T03:18:29.411447Z","end":"2025-01-27T03:18:29.780643Z","steps":["trace[914750142] 'agreement among raft nodes before linearized reading'  (duration: 367.512875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:18:29.780826Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:18:29.411432Z","time spent":"369.374964ms","remote":"127.0.0.1:54750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-27T03:18:29.929986Z","caller":"traceutil/trace.go:171","msg":"trace[1129911564] linearizableReadLoop","detail":"{readStateIndex:1587; appliedIndex:1586; }","duration":"144.824847ms","start":"2025-01-27T03:18:29.785146Z","end":"2025-01-27T03:18:29.929971Z","steps":["trace[1129911564] 'read index received'  (duration: 121.712663ms)","trace[1129911564] 'applied index is now lower than readState.Index'  (duration: 23.111642ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T03:18:29.930080Z","caller":"traceutil/trace.go:171","msg":"trace[1974754168] transaction","detail":"{read_only:false; response_revision:1380; number_of_response:1; }","duration":"145.403702ms","start":"2025-01-27T03:18:29.784669Z","end":"2025-01-27T03:18:29.930072Z","steps":["trace[1974754168] 'process raft request'  (duration: 122.248324ms)","trace[1974754168] 'compare'  (duration: 22.874245ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:18:29.930316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.113793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:18:29.930371Z","caller":"traceutil/trace.go:171","msg":"trace[1606579924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1380; }","duration":"145.291567ms","start":"2025-01-27T03:18:29.785068Z","end":"2025-01-27T03:18:29.930359Z","steps":["trace[1606579924] 'agreement among raft nodes before linearized reading'  (duration: 145.141518ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:22:49.074890Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1344}
	{"level":"info","ts":"2025-01-27T03:22:49.079293Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1344,"took":"4.016211ms","hash":595776626,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:22:49.079350Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":595776626,"revision":1344,"compact-revision":1092}
	
	
	==> kernel <==
	 03:23:50 up 26 min,  0 users,  load average: 0.01, 0.07, 0.12
	Linux no-preload-844432 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b189f38bab3b025fc90d9c3b8095dd1d6dff9e3e94b698a43835e9625785ab97] <==
	I0127 03:18:51.726821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:18:51.727790       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:20:51.727949       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:20:51.727941       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:20:51.728313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:20:51.728374       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:20:51.729516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:20:51.729577       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:22:50.728049       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:50.728410       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:22:51.729919       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:22:51.729919       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:22:51.730172       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:22:51.730251       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:22:51.731340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:22:51.731431       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b5dd371e697c2a817a6f5bbe47507779bcb0efff1f8c0a806fb0ed27b7e0aa3b] <==
	W0127 03:02:40.166395       1 logging.go:55] [core] [Channel #29 SubChannel #30]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.254017       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.284228       1 logging.go:55] [core] [Channel #146 SubChannel #147]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.300993       1 logging.go:55] [core] [Channel #107 SubChannel #108]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.319826       1 logging.go:55] [core] [Channel #95 SubChannel #96]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.334399       1 logging.go:55] [core] [Channel #86 SubChannel #87]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.411540       1 logging.go:55] [core] [Channel #110 SubChannel #111]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.415999       1 logging.go:55] [core] [Channel #20 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.439084       1 logging.go:55] [core] [Channel #71 SubChannel #72]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.549161       1 logging.go:55] [core] [Channel #32 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.563806       1 logging.go:55] [core] [Channel #104 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.685967       1 logging.go:55] [core] [Channel #44 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.695549       1 logging.go:55] [core] [Channel #161 SubChannel #162]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.716431       1 logging.go:55] [core] [Channel #149 SubChannel #150]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.840184       1 logging.go:55] [core] [Channel #26 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.886630       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:40.940492       1 logging.go:55] [core] [Channel #74 SubChannel #75]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:41.026294       1 logging.go:55] [core] [Channel #158 SubChannel #159]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:41.035001       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:42.552371       1 logging.go:55] [core] [Channel #202 SubChannel #203]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:44.590221       1 logging.go:55] [core] [Channel #116 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:44.625547       1 logging.go:55] [core] [Channel #23 SubChannel #24]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:44.661634       1 logging.go:55] [core] [Channel #53 SubChannel #54]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:44.803998       1 logging.go:55] [core] [Channel #65 SubChannel #66]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:02:44.885415       1 logging.go:55] [core] [Channel #143 SubChannel #144]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [37cb561902636e6921e2cde02a347a42a5aa57f65ee56cc4f688e8587cc0cdb9] <==
	I0127 03:18:55.175204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="294.309µs"
	E0127 03:18:57.494887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:18:57.551212       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:18:58.695229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="104.974µs"
	I0127 03:19:00.250531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="61.377µs"
	I0127 03:19:01.120116       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-844432"
	I0127 03:19:09.171821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="130.728µs"
	E0127 03:19:27.501639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:27.558848       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:19:57.508823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:19:57.566358       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:27.515547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:27.577298       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:20:57.522416       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:20:57.585624       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:27.529852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:27.594511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:21:57.535800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:21:57.605490       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:27.542891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:27.613676       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:22:57.548968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:22:57.620952       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:23:27.556761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:23:27.630962       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8f7b15e74328d9ac8182ef2480b8152bc193199d6c97097b2448839fc800c4ae] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 03:02:59.693947       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 03:02:59.764330       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.144"]
	E0127 03:02:59.764439       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 03:03:00.159518       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 03:03:00.159589       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 03:03:00.159621       1 server_linux.go:170] "Using iptables Proxier"
	I0127 03:03:00.186274       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 03:03:00.186621       1 server.go:497] "Version info" version="v1.32.1"
	I0127 03:03:00.186646       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 03:03:00.192123       1 config.go:199] "Starting service config controller"
	I0127 03:03:00.192177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 03:03:00.192196       1 config.go:105] "Starting endpoint slice config controller"
	I0127 03:03:00.192213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 03:03:00.193154       1 config.go:329] "Starting node config controller"
	I0127 03:03:00.193187       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 03:03:00.319509       1 shared_informer.go:320] Caches are synced for node config
	I0127 03:03:00.319595       1 shared_informer.go:320] Caches are synced for service config
	I0127 03:03:00.319609       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8e2584e1b50365dd078d77bf74bf44876b222ac322ad88bf702858aaf462dcec] <==
	W0127 03:02:51.614106       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:02:51.614235       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.700480       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:51.700581       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.716826       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 03:02:51.716877       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.743218       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:51.743308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.745411       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:51.745449       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.840893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:02:51.841027       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.893945       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:51.894074       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.897349       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 03:02:51.897460       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.914314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 03:02:51.914362       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.935449       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 03:02:51.935570       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:02:51.938353       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:02:51.938437       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 03:02:52.032508       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 03:02:52.032588       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 03:02:54.700096       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:22:53 no-preload-844432 kubelet[3676]: E0127 03:22:53.601084    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948173598073519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:22:55 no-preload-844432 kubelet[3676]: E0127 03:22:55.156008    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-ml7kw" podUID="ef8605e4-7be9-40a5-aa31-ee050f7a0f53"
	Jan 27 03:22:57 no-preload-844432 kubelet[3676]: I0127 03:22:57.153346    3676 scope.go:117] "RemoveContainer" containerID="ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095"
	Jan 27 03:22:57 no-preload-844432 kubelet[3676]: E0127 03:22:57.153569    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lqzvl_kubernetes-dashboard(8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lqzvl" podUID="8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c"
	Jan 27 03:23:03 no-preload-844432 kubelet[3676]: E0127 03:23:03.602271    3676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948183602011459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:03 no-preload-844432 kubelet[3676]: E0127 03:23:03.602527    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948183602011459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:10 no-preload-844432 kubelet[3676]: E0127 03:23:10.155304    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-ml7kw" podUID="ef8605e4-7be9-40a5-aa31-ee050f7a0f53"
	Jan 27 03:23:11 no-preload-844432 kubelet[3676]: I0127 03:23:11.153242    3676 scope.go:117] "RemoveContainer" containerID="ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095"
	Jan 27 03:23:11 no-preload-844432 kubelet[3676]: E0127 03:23:11.153542    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lqzvl_kubernetes-dashboard(8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lqzvl" podUID="8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c"
	Jan 27 03:23:13 no-preload-844432 kubelet[3676]: E0127 03:23:13.605911    3676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948193605251255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:13 no-preload-844432 kubelet[3676]: E0127 03:23:13.607053    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948193605251255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:22 no-preload-844432 kubelet[3676]: I0127 03:23:22.153516    3676 scope.go:117] "RemoveContainer" containerID="ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095"
	Jan 27 03:23:22 no-preload-844432 kubelet[3676]: E0127 03:23:22.153847    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lqzvl_kubernetes-dashboard(8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lqzvl" podUID="8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c"
	Jan 27 03:23:23 no-preload-844432 kubelet[3676]: E0127 03:23:23.609074    3676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948203608505527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:23 no-preload-844432 kubelet[3676]: E0127 03:23:23.610090    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948203608505527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:25 no-preload-844432 kubelet[3676]: E0127 03:23:25.154621    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-ml7kw" podUID="ef8605e4-7be9-40a5-aa31-ee050f7a0f53"
	Jan 27 03:23:33 no-preload-844432 kubelet[3676]: E0127 03:23:33.612371    3676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948213611093847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:33 no-preload-844432 kubelet[3676]: E0127 03:23:33.612455    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948213611093847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:35 no-preload-844432 kubelet[3676]: I0127 03:23:35.153407    3676 scope.go:117] "RemoveContainer" containerID="ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095"
	Jan 27 03:23:35 no-preload-844432 kubelet[3676]: E0127 03:23:35.154981    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lqzvl_kubernetes-dashboard(8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lqzvl" podUID="8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c"
	Jan 27 03:23:39 no-preload-844432 kubelet[3676]: E0127 03:23:39.155326    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-ml7kw" podUID="ef8605e4-7be9-40a5-aa31-ee050f7a0f53"
	Jan 27 03:23:43 no-preload-844432 kubelet[3676]: E0127 03:23:43.614389    3676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948223613998359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:43 no-preload-844432 kubelet[3676]: E0127 03:23:43.614455    3676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948223613998359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:23:50 no-preload-844432 kubelet[3676]: I0127 03:23:50.153134    3676 scope.go:117] "RemoveContainer" containerID="ea0ac346951fa4076a2ea94ff34faeb3067d6c5663e1cbd58b6856b241865095"
	Jan 27 03:23:50 no-preload-844432 kubelet[3676]: E0127 03:23:50.153284    3676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-lqzvl_kubernetes-dashboard(8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-lqzvl" podUID="8d1470dd-6c76-4fc9-8a99-3d7e9f5dc63c"
	
	
	==> kubernetes-dashboard [f650dcd2e6a986e77e8e4329d18524fa2c285bae6a3957023cacf1918de89677] <==
	2025/01/27 03:11:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:12:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:13:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:14:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:15:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:16:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:17:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:18:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:19:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:20:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [978d270bf91af3e6ed5c59f85253595cafd862823a852f61e03eab2e0bfc8d54] <==
	I0127 03:03:00.597781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 03:03:00.612624       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 03:03:00.612767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 03:03:00.624935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 03:03:00.625091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-844432_c72e3aa7-ddd8-49dc-a35b-6bf9f57e73a1!
	I0127 03:03:00.626688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"236b9151-1d45-4b47-a273-013ded9918bb", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-844432_c72e3aa7-ddd8-49dc-a35b-6bf9f57e73a1 became leader
	I0127 03:03:00.741824       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-844432_c72e3aa7-ddd8-49dc-a35b-6bf9f57e73a1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-844432 -n no-preload-844432
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-844432 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-ml7kw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-844432 describe pod metrics-server-f79f97bbb-ml7kw
E0127 03:23:51.788472  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-844432 describe pod metrics-server-f79f97bbb-ml7kw: exit status 1 (63.497912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-ml7kw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-844432 describe pod metrics-server-f79f97bbb-ml7kw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1621.88s)
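The kubelet section of the log dump above cycles through two errors: metrics-server is stuck in ImagePullBackOff because its image reference fake.domain/registry.k8s.io/echoserver:1.4 cannot be resolved (the same fake.domain registry override these addon tests configure), and dashboard-metrics-scraper is in CrashLoopBackOff. A minimal sketch for inspecting those pods directly when reproducing this run locally; the k8s-app label selectors are assumptions based on the stock addon manifests, not something shown in this report:

	# List the failing pods (label selectors assumed from the standard manifests).
	kubectl --context no-preload-844432 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-844432 -n kubernetes-dashboard get pods -l k8s-app=dashboard-metrics-scraper
	# Show image-pull and restart events for the metrics-server pod.
	kubectl --context no-preload-844432 -n kube-system describe pod -l k8s-app=metrics-server
	# Re-collect the post-mortem logs the same way the test harness does.
	out/minikube-linux-amd64 -p no-preload-844432 logs -n 25

The describe call at helpers_test.go:277 returned NotFound likely because the pod was removed between the pod list at helpers_test.go:272 and the describe; the selector-based form above avoids hard-coding a pod name.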

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-542356 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-542356 create -f testdata/busybox.yaml: exit status 1 (56.226128ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-542356" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-542356 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 6 (241.952178ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:58:07.585362  947817 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-542356" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-542356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 6 (248.267037ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:58:07.831526  947846 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-542356" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-542356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
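Both status checks above report the same root cause: the VM is running, but the "old-k8s-version-542356" entry is missing from /home/jenkins/minikube-integration/20316-897624/kubeconfig, so every kubectl --context call fails with context "old-k8s-version-542356" does not exist. A minimal sketch of the repair suggested by the warning in the status output, assuming the profile still exists on the host:

	# Rewrite the kubeconfig entry for this profile, as the status warning suggests.
	out/minikube-linux-amd64 update-context -p old-k8s-version-542356
	# Confirm the context now exists and reaches the VM's API server.
	kubectl config get-contexts old-k8s-version-542356
	kubectl --context old-k8s-version-542356 get nodes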

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-542356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-542356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.13937672s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-542356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-542356 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-542356 describe deploy/metrics-server -n kube-system: exit status 1 (45.833366ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-542356" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-542356 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 6 (231.262214ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:59:54.249546  948464 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-542356" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-542356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.42s)
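The addon enable fails one layer deeper than the missing kubeconfig context: the enable callback runs kubectl apply inside the VM against localhost:8443 and the connection is refused, meaning the v1.20.0 API server never came up (consistent with the SecondStart restart attempt recorded next). A minimal sketch for confirming that from the host, assuming crictl and curl are available on the CRI-O node and SSH access to the profile works:

	# Check whether an apiserver container exists and is running inside the VM.
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo crictl ps -a | grep kube-apiserver"
	# Probe the apiserver port directly from inside the VM.
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "curl -sk https://localhost:8443/healthz"
	# Collect the full log bundle referenced in the error box above.
	out/minikube-linux-amd64 -p old-k8s-version-542356 logs --file=logs.txt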

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (508.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m27.297174432s)

                                                
                                                
-- stdout --
	* [old-k8s-version-542356] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-542356" primary control-plane node in "old-k8s-version-542356" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-542356" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:59:56.796847  948597 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:59:56.797009  948597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:56.797020  948597 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:56.797025  948597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:56.797187  948597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:59:56.797737  948597 out.go:352] Setting JSON to false
	I0127 02:59:56.798766  948597 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13340,"bootTime":1737933457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:59:56.798877  948597 start.go:139] virtualization: kvm guest
	I0127 02:59:56.800842  948597 out.go:177] * [old-k8s-version-542356] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:59:56.802045  948597 notify.go:220] Checking for updates...
	I0127 02:59:56.802058  948597 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:59:56.803462  948597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:59:56.804544  948597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:59:56.805630  948597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:59:56.806743  948597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:59:56.807898  948597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:59:56.809420  948597 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 02:59:56.809787  948597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:59:56.809840  948597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:59:56.825318  948597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0127 02:59:56.825761  948597 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:59:56.826379  948597 main.go:141] libmachine: Using API Version  1
	I0127 02:59:56.826436  948597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:59:56.826781  948597 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:59:56.826986  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:59:56.829021  948597 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 02:59:56.830261  948597 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:59:56.830568  948597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:59:56.830605  948597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:59:56.845462  948597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0127 02:59:56.845863  948597 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:59:56.846306  948597 main.go:141] libmachine: Using API Version  1
	I0127 02:59:56.846326  948597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:59:56.846596  948597 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:59:56.846840  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:59:56.881098  948597 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 02:59:56.882330  948597 start.go:297] selected driver: kvm2
	I0127 02:59:56.882343  948597 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:59:56.882444  948597 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:59:56.883106  948597 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:59:56.883199  948597 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 02:59:56.898373  948597 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 02:59:56.898785  948597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:59:56.898821  948597 cni.go:84] Creating CNI manager for ""
	I0127 02:59:56.898876  948597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 02:59:56.898911  948597 start.go:340] cluster config:
	{Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:59:56.899013  948597 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:59:56.900584  948597 out.go:177] * Starting "old-k8s-version-542356" primary control-plane node in "old-k8s-version-542356" cluster
	I0127 02:59:56.902004  948597 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 02:59:56.902046  948597 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 02:59:56.902060  948597 cache.go:56] Caching tarball of preloaded images
	I0127 02:59:56.902158  948597 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 02:59:56.902171  948597 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 02:59:56.902280  948597 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/config.json ...
	I0127 02:59:56.902477  948597 start.go:360] acquireMachinesLock for old-k8s-version-542356: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 02:59:56.902541  948597 start.go:364] duration metric: took 40.852µs to acquireMachinesLock for "old-k8s-version-542356"
	I0127 02:59:56.902562  948597 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:59:56.902573  948597 fix.go:54] fixHost starting: 
	I0127 02:59:56.902865  948597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:59:56.902915  948597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:59:56.917603  948597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40699
	I0127 02:59:56.918079  948597 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:59:56.918589  948597 main.go:141] libmachine: Using API Version  1
	I0127 02:59:56.918608  948597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:59:56.918903  948597 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:59:56.919144  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 02:59:56.919332  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetState
	I0127 02:59:56.920896  948597 fix.go:112] recreateIfNeeded on old-k8s-version-542356: state=Stopped err=<nil>
	I0127 02:59:56.920932  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	W0127 02:59:56.921093  948597 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:59:56.923690  948597 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-542356" ...
	I0127 02:59:56.924865  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .Start
	I0127 02:59:56.925063  948597 main.go:141] libmachine: (old-k8s-version-542356) starting domain...
	I0127 02:59:56.925085  948597 main.go:141] libmachine: (old-k8s-version-542356) ensuring networks are active...
	I0127 02:59:56.925902  948597 main.go:141] libmachine: (old-k8s-version-542356) Ensuring network default is active
	I0127 02:59:56.926348  948597 main.go:141] libmachine: (old-k8s-version-542356) Ensuring network mk-old-k8s-version-542356 is active
	I0127 02:59:56.926686  948597 main.go:141] libmachine: (old-k8s-version-542356) getting domain XML...
	I0127 02:59:56.927314  948597 main.go:141] libmachine: (old-k8s-version-542356) creating domain...
	I0127 02:59:58.158320  948597 main.go:141] libmachine: (old-k8s-version-542356) waiting for IP...
	I0127 02:59:58.159228  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:59:58.159643  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:59:58.159762  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:59:58.159639  948632 retry.go:31] will retry after 208.09324ms: waiting for domain to come up
	I0127 02:59:58.369131  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:59:58.369828  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:59:58.369856  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:59:58.369765  948632 retry.go:31] will retry after 324.365764ms: waiting for domain to come up
	I0127 02:59:58.695367  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:59:58.695870  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:59:58.695956  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:59:58.695849  948632 retry.go:31] will retry after 366.340443ms: waiting for domain to come up
	I0127 02:59:59.063330  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:59:59.063946  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:59:59.063979  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:59:59.063896  948632 retry.go:31] will retry after 467.161558ms: waiting for domain to come up
	I0127 02:59:59.532244  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 02:59:59.532823  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 02:59:59.532885  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 02:59:59.532798  948632 retry.go:31] will retry after 523.036617ms: waiting for domain to come up
	I0127 03:00:00.057551  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:00.058171  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:00.058203  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:00.058150  948632 retry.go:31] will retry after 650.647775ms: waiting for domain to come up
	I0127 03:00:00.710180  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:00.710553  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:00.710591  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:00.710540  948632 retry.go:31] will retry after 755.702478ms: waiting for domain to come up
	I0127 03:00:01.467629  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:01.468118  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:01.468148  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:01.468076  948632 retry.go:31] will retry after 1.272804518s: waiting for domain to come up
	I0127 03:00:02.741973  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:02.742508  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:02.742533  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:02.742489  948632 retry.go:31] will retry after 1.720305374s: waiting for domain to come up
	I0127 03:00:04.464498  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:04.465051  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:04.465087  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:04.465034  948632 retry.go:31] will retry after 1.462522026s: waiting for domain to come up
	I0127 03:00:05.929615  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:05.930226  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:05.930257  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:05.930206  948632 retry.go:31] will retry after 2.585178901s: waiting for domain to come up
	I0127 03:00:08.517756  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:08.518234  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:08.518262  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:08.518217  948632 retry.go:31] will retry after 3.563234015s: waiting for domain to come up
	I0127 03:00:12.082928  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:12.083471  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | unable to find current IP address of domain old-k8s-version-542356 in network mk-old-k8s-version-542356
	I0127 03:00:12.083511  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | I0127 03:00:12.083446  948632 retry.go:31] will retry after 3.42838631s: waiting for domain to come up
	I0127 03:00:15.514300  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.514799  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has current primary IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.514829  948597 main.go:141] libmachine: (old-k8s-version-542356) found domain IP: 192.168.39.85
	I0127 03:00:15.514859  948597 main.go:141] libmachine: (old-k8s-version-542356) reserving static IP address...
	I0127 03:00:15.515310  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "old-k8s-version-542356", mac: "52:54:00:12:05:b8", ip: "192.168.39.85"} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.515342  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | skip adding static IP to network mk-old-k8s-version-542356 - found existing host DHCP lease matching {name: "old-k8s-version-542356", mac: "52:54:00:12:05:b8", ip: "192.168.39.85"}
	I0127 03:00:15.515367  948597 main.go:141] libmachine: (old-k8s-version-542356) reserved static IP address 192.168.39.85 for domain old-k8s-version-542356
	I0127 03:00:15.515385  948597 main.go:141] libmachine: (old-k8s-version-542356) waiting for SSH...
	I0127 03:00:15.515396  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | Getting to WaitForSSH function...
	I0127 03:00:15.517579  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.517949  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.517970  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.518107  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | Using SSH client type: external
	I0127 03:00:15.518129  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa (-rw-------)
	I0127 03:00:15.518157  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:00:15.518167  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | About to run SSH command:
	I0127 03:00:15.518180  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | exit 0
	I0127 03:00:15.649371  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | SSH cmd err, output: <nil>: 
	I0127 03:00:15.649746  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetConfigRaw
	I0127 03:00:15.650417  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 03:00:15.653011  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.653442  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.653467  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.653795  948597 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/config.json ...
	I0127 03:00:15.654052  948597 machine.go:93] provisionDockerMachine start ...
	I0127 03:00:15.654078  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:15.654297  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:15.656707  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.657077  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.657107  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.657296  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:15.657493  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.657689  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.657838  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:15.658015  948597 main.go:141] libmachine: Using SSH client type: native
	I0127 03:00:15.658222  948597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 03:00:15.658234  948597 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:00:15.769221  948597 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:00:15.769259  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 03:00:15.769559  948597 buildroot.go:166] provisioning hostname "old-k8s-version-542356"
	I0127 03:00:15.769583  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 03:00:15.769793  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:15.772884  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.773321  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.773358  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.773483  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:15.773767  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.773965  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.774105  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:15.774279  948597 main.go:141] libmachine: Using SSH client type: native
	I0127 03:00:15.774453  948597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 03:00:15.774466  948597 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-542356 && echo "old-k8s-version-542356" | sudo tee /etc/hostname
	I0127 03:00:15.908782  948597 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-542356
	
	I0127 03:00:15.908822  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:15.912438  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.912901  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:15.912958  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:15.913166  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:15.913386  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.913602  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:15.913723  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:15.913892  948597 main.go:141] libmachine: Using SSH client type: native
	I0127 03:00:15.914090  948597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 03:00:15.914109  948597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-542356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-542356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-542356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:00:16.043076  948597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:00:16.043123  948597 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:00:16.043163  948597 buildroot.go:174] setting up certificates
	I0127 03:00:16.043176  948597 provision.go:84] configureAuth start
	I0127 03:00:16.043186  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetMachineName
	I0127 03:00:16.043513  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 03:00:16.046743  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.047061  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.047093  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.047398  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.049836  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.050266  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.050302  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.050521  948597 provision.go:143] copyHostCerts
	I0127 03:00:16.050600  948597 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:00:16.050621  948597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:00:16.050686  948597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:00:16.050790  948597 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:00:16.050798  948597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:00:16.050825  948597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:00:16.050895  948597 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:00:16.050902  948597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:00:16.050927  948597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:00:16.050990  948597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-542356 san=[127.0.0.1 192.168.39.85 localhost minikube old-k8s-version-542356]
	I0127 03:00:16.109299  948597 provision.go:177] copyRemoteCerts
	I0127 03:00:16.109378  948597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:00:16.109416  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.112348  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.112717  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.112758  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.112912  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.113146  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.113339  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.113508  948597 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 03:00:16.204392  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:00:16.232383  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 03:00:16.257262  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:00:16.280888  948597 provision.go:87] duration metric: took 237.695082ms to configureAuth
	I0127 03:00:16.280948  948597 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:00:16.281202  948597 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:00:16.281303  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.284304  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.284687  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.284711  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.284917  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.285138  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.285313  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.285490  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.285657  948597 main.go:141] libmachine: Using SSH client type: native
	I0127 03:00:16.285918  948597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 03:00:16.285936  948597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:00:16.518470  948597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:00:16.518498  948597 machine.go:96] duration metric: took 864.430012ms to provisionDockerMachine
	I0127 03:00:16.518510  948597 start.go:293] postStartSetup for "old-k8s-version-542356" (driver="kvm2")
	I0127 03:00:16.518521  948597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:00:16.518542  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:16.518917  948597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:00:16.518957  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.521617  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.522117  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.522152  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.522361  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.522632  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.522820  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.522963  948597 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 03:00:16.607874  948597 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:00:16.612353  948597 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:00:16.612393  948597 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:00:16.612467  948597 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:00:16.612561  948597 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:00:16.612707  948597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:00:16.622575  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:00:16.646558  948597 start.go:296] duration metric: took 128.02877ms for postStartSetup
	I0127 03:00:16.646604  948597 fix.go:56] duration metric: took 19.744032877s for fixHost
	I0127 03:00:16.646626  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.649739  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.650102  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.650147  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.650334  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.650562  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.650701  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.650873  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.651067  948597 main.go:141] libmachine: Using SSH client type: native
	I0127 03:00:16.651301  948597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0127 03:00:16.651312  948597 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:00:16.761310  948597 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946816.719752815
	
	I0127 03:00:16.761342  948597 fix.go:216] guest clock: 1737946816.719752815
	I0127 03:00:16.761353  948597 fix.go:229] Guest: 2025-01-27 03:00:16.719752815 +0000 UTC Remote: 2025-01-27 03:00:16.646608452 +0000 UTC m=+19.888144241 (delta=73.144363ms)
	I0127 03:00:16.761399  948597 fix.go:200] guest clock delta is within tolerance: 73.144363ms
	I0127 03:00:16.761424  948597 start.go:83] releasing machines lock for "old-k8s-version-542356", held for 19.85887082s
	I0127 03:00:16.761459  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:16.761739  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 03:00:16.764330  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.764653  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.764696  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.764826  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:16.765355  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:16.765528  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .DriverName
	I0127 03:00:16.765631  948597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:00:16.765674  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.765787  948597 ssh_runner.go:195] Run: cat /version.json
	I0127 03:00:16.765830  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHHostname
	I0127 03:00:16.768095  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.768412  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.768442  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.768477  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.768535  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.768735  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.768798  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:16.768829  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:16.768884  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.769041  948597 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 03:00:16.769108  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHPort
	I0127 03:00:16.769275  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHKeyPath
	I0127 03:00:16.769415  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetSSHUsername
	I0127 03:00:16.769533  948597 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/old-k8s-version-542356/id_rsa Username:docker}
	I0127 03:00:16.883849  948597 ssh_runner.go:195] Run: systemctl --version
	I0127 03:00:16.889649  948597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:00:17.029226  948597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:00:17.034717  948597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:00:17.034832  948597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:00:17.050576  948597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:00:17.050600  948597 start.go:495] detecting cgroup driver to use...
	I0127 03:00:17.050659  948597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:00:17.066244  948597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:00:17.081861  948597 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:00:17.081934  948597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:00:17.095906  948597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:00:17.109997  948597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:00:17.229034  948597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:00:17.387771  948597 docker.go:233] disabling docker service ...
	I0127 03:00:17.387881  948597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:00:17.401823  948597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:00:17.414949  948597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:00:17.534202  948597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:00:17.655897  948597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:00:17.670858  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:00:17.690255  948597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 03:00:17.690329  948597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:00:17.700673  948597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:00:17.700738  948597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:00:17.711153  948597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:00:17.721173  948597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:00:17.731741  948597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:00:17.742355  948597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:00:17.751074  948597 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:00:17.751150  948597 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:00:17.763927  948597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:00:17.772980  948597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:00:17.884430  948597 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:00:17.973476  948597 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:00:17.973558  948597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:00:17.978269  948597 start.go:563] Will wait 60s for crictl version
	I0127 03:00:17.978324  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:17.982749  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:00:18.024744  948597 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:00:18.024855  948597 ssh_runner.go:195] Run: crio --version
	I0127 03:00:18.052215  948597 ssh_runner.go:195] Run: crio --version
	I0127 03:00:18.082503  948597 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 03:00:18.083836  948597 main.go:141] libmachine: (old-k8s-version-542356) Calling .GetIP
	I0127 03:00:18.086565  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:18.086992  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:05:b8", ip: ""} in network mk-old-k8s-version-542356: {Iface:virbr1 ExpiryTime:2025-01-27 04:00:08 +0000 UTC Type:0 Mac:52:54:00:12:05:b8 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:old-k8s-version-542356 Clientid:01:52:54:00:12:05:b8}
	I0127 03:00:18.087024  948597 main.go:141] libmachine: (old-k8s-version-542356) DBG | domain old-k8s-version-542356 has defined IP address 192.168.39.85 and MAC address 52:54:00:12:05:b8 in network mk-old-k8s-version-542356
	I0127 03:00:18.087246  948597 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 03:00:18.091332  948597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:00:18.104174  948597 kubeadm.go:883] updating cluster {Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:00:18.104344  948597 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 03:00:18.104424  948597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:00:18.155389  948597 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 03:00:18.155461  948597 ssh_runner.go:195] Run: which lz4
	I0127 03:00:18.159445  948597 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:00:18.163529  948597 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:00:18.163572  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 03:00:19.695223  948597 crio.go:462] duration metric: took 1.535800737s to copy over tarball
	I0127 03:00:19.695311  948597 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:00:22.512643  948597 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.81730373s)
	I0127 03:00:22.512681  948597 crio.go:469] duration metric: took 2.817421615s to extract the tarball
	I0127 03:00:22.512692  948597 ssh_runner.go:146] rm: /preloaded.tar.lz4
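The preload path above just copies a ~473 MB lz4 tarball of container images onto the node and unpacks it into /var. A rough manual equivalent, assuming SSH access to the guest (the host alias "node" is hypothetical; paths and tar flags are taken from the log):

    scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 node:/preloaded.tar.lz4
    ssh node 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
    ssh node 'sudo crictl images --output json | head'   # the log re-checks the image list right below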
	I0127 03:00:22.556265  948597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:00:22.591772  948597 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 03:00:22.591813  948597 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 03:00:22.591917  948597 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:22.591946  948597 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:22.591954  948597 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 03:00:22.591959  948597 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:22.592007  948597 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 03:00:22.591925  948597 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:22.592106  948597 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:22.591917  948597 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:00:22.593833  948597 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:22.593914  948597 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:22.593951  948597 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 03:00:22.593945  948597 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:00:22.594012  948597 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:22.594089  948597 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 03:00:22.594032  948597 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:22.594431  948597 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:22.841573  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:22.842310  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 03:00:22.849139  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:22.857700  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:22.870177  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:22.884547  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:22.898637  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 03:00:22.919490  948597 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 03:00:22.919564  948597 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:22.919624  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:22.937920  948597 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 03:00:22.937980  948597 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 03:00:22.938034  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:22.997194  948597 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 03:00:22.997242  948597 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:22.997251  948597 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 03:00:22.997288  948597 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:22.997300  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:22.997339  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:22.998722  948597 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 03:00:22.998753  948597 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:22.998788  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:23.024143  948597 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 03:00:23.024200  948597 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:23.024256  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:23.025786  948597 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 03:00:23.025830  948597 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 03:00:23.025834  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 03:00:23.025850  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:23.025856  948597 ssh_runner.go:195] Run: which crictl
	I0127 03:00:23.025799  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:23.025931  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:23.025964  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:23.028529  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:23.157181  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:23.157278  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:23.159710  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 03:00:23.159759  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 03:00:23.159826  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:23.159858  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:23.159956  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:23.294584  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 03:00:23.294584  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 03:00:23.321372  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 03:00:23.321392  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 03:00:23.321469  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 03:00:23.329857  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 03:00:23.329857  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 03:00:23.409602  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 03:00:23.465995  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 03:00:23.466004  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 03:00:23.466056  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 03:00:23.470264  948597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 03:00:23.485228  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 03:00:23.485326  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 03:00:23.516476  948597 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 03:00:23.779585  948597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:00:23.922410  948597 cache_images.go:92] duration metric: took 1.330577236s to LoadCachedImages
	W0127 03:00:23.922535  948597 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
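The LoadCachedImages failure above is benign at this point: the per-image cache files under .minikube/cache/images were never written, so minikube logs the warning and continues, and the images end up being pulled by the runtime instead. A quick local check, as a sketch (path from the log):

    ls -l /home/jenkins/minikube-integration/20316-897624/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null \
      || echo "no image cache; images will be pulled by the runtime instead"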
	I0127 03:00:23.922555  948597 kubeadm.go:934] updating node { 192.168.39.85 8443 v1.20.0 crio true true} ...
	I0127 03:00:23.922687  948597 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-542356 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
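The kubelet unit rendered above is what lands on the node in the scp calls a few lines below; to inspect it in place (paths from the log), a sketch:

    sudo systemctl cat kubelet                                        # unit plus drop-ins
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart override shown above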
	I0127 03:00:23.922774  948597 ssh_runner.go:195] Run: crio config
	I0127 03:00:23.973504  948597 cni.go:84] Creating CNI manager for ""
	I0127 03:00:23.973541  948597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:00:23.973557  948597 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:00:23.973585  948597 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-542356 NodeName:old-k8s-version-542356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 03:00:23.973782  948597 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-542356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:00:23.973868  948597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 03:00:23.983898  948597 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:00:23.983982  948597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:00:23.993519  948597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 03:00:24.011333  948597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:00:24.027909  948597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
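At this point the kubelet drop-in, the kubelet unit, and the rendered kubeadm.yaml.new have been copied onto the node. A small sketch to confirm they landed and to see whether the rendered config actually differs from the existing one (minikube runs the same diff itself further down in this log):

    sudo ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true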
	I0127 03:00:24.045827  948597 ssh_runner.go:195] Run: grep 192.168.39.85	control-plane.minikube.internal$ /etc/hosts
	I0127 03:00:24.049631  948597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:00:24.064340  948597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:00:24.199570  948597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:00:24.220244  948597 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356 for IP: 192.168.39.85
	I0127 03:00:24.220270  948597 certs.go:194] generating shared ca certs ...
	I0127 03:00:24.220294  948597 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:00:24.220470  948597 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:00:24.220526  948597 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:00:24.220541  948597 certs.go:256] generating profile certs ...
	I0127 03:00:24.220686  948597 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/client.key
	I0127 03:00:24.220762  948597 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key.4fcae880
	I0127 03:00:24.220815  948597 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key
	I0127 03:00:24.220997  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:00:24.221044  948597 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:00:24.221061  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:00:24.221102  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:00:24.221201  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:00:24.221244  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:00:24.221299  948597 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:00:24.221965  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:00:24.263347  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:00:24.307288  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:00:24.339910  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:00:24.380524  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 03:00:24.411202  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 03:00:24.437224  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:00:24.466497  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/old-k8s-version-542356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:00:24.494929  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:00:24.518343  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:00:24.541295  948597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:00:24.563962  948597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:00:24.580179  948597 ssh_runner.go:195] Run: openssl version
	I0127 03:00:24.586070  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:00:24.596354  948597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:00:24.600604  948597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:00:24.600657  948597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:00:24.606219  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:00:24.616273  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:00:24.626484  948597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:00:24.630824  948597 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:00:24.630898  948597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:00:24.636220  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:00:24.646145  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:00:24.656066  948597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:00:24.660318  948597 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:00:24.660376  948597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:00:24.665930  948597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
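The openssl x509 -hash / ln -fs pairs above are how the /etc/ssl/certs/<hash>.0 names (b5213941.0, 51391683.0, 3ec20f2e.0) are derived: the link name is the certificate's subject hash. The same thing done explicitly, as a sketch over the three certs shown in the log:

    for pem in minikubeCA.pem 904889.pem 9048892.pem; do
      h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
      sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$h.0"
    done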
	I0127 03:00:24.676444  948597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:00:24.681083  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:00:24.687201  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:00:24.692847  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:00:24.698760  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:00:24.704129  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:00:24.709686  948597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
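Each of the -checkend 86400 calls above exits non-zero if the certificate expires within the next 24 hours, which is the usual way to decide whether renewal is needed. For a single cert:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert valid for at least another 24h" \
      || echo "cert expires within 24h (or could not be read)"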
	I0127 03:00:24.716301  948597 kubeadm.go:392] StartCluster: {Name:old-k8s-version-542356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-542356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:00:24.716399  948597 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:00:24.716474  948597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:00:24.754267  948597 cri.go:89] found id: ""
	I0127 03:00:24.754365  948597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:00:24.764789  948597 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:00:24.764811  948597 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:00:24.764856  948597 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:00:24.774708  948597 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:00:24.775693  948597 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-542356" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:00:24.776306  948597 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-542356" cluster setting kubeconfig missing "old-k8s-version-542356" context setting]
	I0127 03:00:24.777240  948597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:00:24.866736  948597 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:00:24.877417  948597 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.85
	I0127 03:00:24.877458  948597 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:00:24.877473  948597 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 03:00:24.877537  948597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:00:24.915422  948597 cri.go:89] found id: ""
	I0127 03:00:24.915540  948597 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:00:24.933153  948597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:00:24.943092  948597 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:00:24.943121  948597 kubeadm.go:157] found existing configuration files:
	
	I0127 03:00:24.943181  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:00:24.952442  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:00:24.952511  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:00:24.962140  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:00:24.971460  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:00:24.971527  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:00:24.980789  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:00:24.989821  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:00:24.989893  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:00:24.999453  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:00:25.008850  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:00:25.008956  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:00:25.019154  948597 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:00:25.028518  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:00:25.188252  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:00:26.182935  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:00:26.396293  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:00:26.498869  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
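After the control-plane and etcd phases, kubeadm should have written static pod manifests that the kubelet turns into containers; the apiserver polling that follows never finds a process, and the log gathering further down shows crictl returning no containers at all. A useful manual check at this point would be, as a sketch (not part of the captured log):

    sudo ls /etc/kubernetes/manifests          # expect etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
    sudo systemctl is-active kubelet
    sudo crictl ps -a                          # empty output here matches what the gathering below reports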
	I0127 03:00:26.604975  948597 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:00:26.605082  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:27.105634  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:27.605240  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:28.105765  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:28.605854  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:29.105727  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:29.605946  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:30.106128  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:30.606095  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:31.106249  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:31.606142  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:32.106026  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:32.605181  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:33.105235  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:33.605943  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:34.105616  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:34.606159  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:35.106136  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:35.606136  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:36.105251  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:36.605942  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:37.106036  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:37.605247  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:38.105316  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:38.606212  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:39.105298  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:39.605606  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:40.105747  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:40.605233  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:41.106174  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:41.605307  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:42.106180  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:42.606170  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:43.106195  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:43.605695  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:44.106048  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:44.605891  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:45.105989  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:45.605304  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:46.105893  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:46.605736  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:47.105988  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:47.606150  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:48.105821  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:48.605266  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:49.106192  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:49.605643  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:50.106158  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:50.605277  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:51.105350  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:51.606123  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:52.105281  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:52.605999  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:53.105957  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:53.605320  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:54.105280  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:54.605623  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:55.105232  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:55.606073  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:56.105853  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:56.606115  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:57.105989  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:57.605175  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:58.106055  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:58.605798  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:59.105812  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:00:59.606143  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:00.105559  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:00.605234  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:01.105701  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:01.605163  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:02.105793  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:02.606184  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:03.105904  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:03.605445  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:04.106077  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:04.606229  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:05.105417  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:05.606061  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:06.106060  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:06.606049  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:07.105309  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:07.605217  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:08.105723  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:08.605621  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:09.106178  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:09.606084  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:10.105183  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:10.606188  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:11.105282  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:11.605971  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:12.105696  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:12.605432  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:13.106231  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:13.606040  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:14.105980  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:14.605924  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:15.105984  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:15.606247  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:16.105236  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:16.605248  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:17.105518  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:17.606164  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:18.106222  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:18.605298  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:19.106118  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:19.605959  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:20.106167  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:20.606146  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:21.105936  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:21.606155  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:22.105884  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:22.605380  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:23.106160  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:23.606158  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:24.105161  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:24.606188  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:25.105579  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:25.605228  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:26.106135  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:26.605709  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:26.605812  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:26.656718  948597 cri.go:89] found id: ""
	I0127 03:01:26.656752  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.656764  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:26.656774  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:26.656857  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:26.702459  948597 cri.go:89] found id: ""
	I0127 03:01:26.702493  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.702506  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:26.702516  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:26.702610  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:26.752122  948597 cri.go:89] found id: ""
	I0127 03:01:26.752158  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.752170  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:26.752178  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:26.752243  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:26.793710  948597 cri.go:89] found id: ""
	I0127 03:01:26.793745  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.793757  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:26.793765  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:26.793831  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:26.839972  948597 cri.go:89] found id: ""
	I0127 03:01:26.840011  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.840023  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:26.840030  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:26.840105  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:26.882134  948597 cri.go:89] found id: ""
	I0127 03:01:26.882190  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.882204  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:26.882212  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:26.882285  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:26.927229  948597 cri.go:89] found id: ""
	I0127 03:01:26.927265  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.927278  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:26.927287  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:26.927365  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:26.970472  948597 cri.go:89] found id: ""
	I0127 03:01:26.970508  948597 logs.go:282] 0 containers: []
	W0127 03:01:26.970521  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:26.970535  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:26.970552  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:27.038341  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:27.038375  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:27.056989  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:27.057027  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:27.251883  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:27.251913  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:27.251931  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:27.338605  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:27.338645  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:29.883659  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:29.899963  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:29.900074  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:29.946862  948597 cri.go:89] found id: ""
	I0127 03:01:29.946890  948597 logs.go:282] 0 containers: []
	W0127 03:01:29.946900  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:29.946909  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:29.946962  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:29.988020  948597 cri.go:89] found id: ""
	I0127 03:01:29.988063  948597 logs.go:282] 0 containers: []
	W0127 03:01:29.988075  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:29.988083  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:29.988148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:30.029188  948597 cri.go:89] found id: ""
	I0127 03:01:30.029217  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.029228  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:30.029236  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:30.029323  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:30.078544  948597 cri.go:89] found id: ""
	I0127 03:01:30.078578  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.078588  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:30.078597  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:30.078659  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:30.119963  948597 cri.go:89] found id: ""
	I0127 03:01:30.119999  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.120067  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:30.120085  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:30.120182  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:30.158221  948597 cri.go:89] found id: ""
	I0127 03:01:30.158256  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.158269  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:30.158277  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:30.158345  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:30.193422  948597 cri.go:89] found id: ""
	I0127 03:01:30.193465  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.193476  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:30.193484  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:30.193549  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:30.239030  948597 cri.go:89] found id: ""
	I0127 03:01:30.239065  948597 logs.go:282] 0 containers: []
	W0127 03:01:30.239076  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:30.239090  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:30.239105  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:30.296486  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:30.296527  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:30.317398  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:30.317431  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:30.430177  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:30.430213  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:30.430233  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:30.514902  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:30.514955  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:33.056194  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:33.074196  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:33.074272  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:33.119152  948597 cri.go:89] found id: ""
	I0127 03:01:33.119190  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.119202  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:33.119211  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:33.119281  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:33.165100  948597 cri.go:89] found id: ""
	I0127 03:01:33.165137  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.165150  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:33.165159  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:33.165253  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:33.205774  948597 cri.go:89] found id: ""
	I0127 03:01:33.205826  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.205840  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:33.205851  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:33.205935  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:33.253573  948597 cri.go:89] found id: ""
	I0127 03:01:33.253607  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.253618  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:33.253627  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:33.253695  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:33.299536  948597 cri.go:89] found id: ""
	I0127 03:01:33.299573  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.299585  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:33.299592  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:33.299661  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:33.344784  948597 cri.go:89] found id: ""
	I0127 03:01:33.344820  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.344831  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:33.344840  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:33.344908  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:33.391564  948597 cri.go:89] found id: ""
	I0127 03:01:33.391600  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.391611  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:33.391620  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:33.391714  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:33.441344  948597 cri.go:89] found id: ""
	I0127 03:01:33.441377  948597 logs.go:282] 0 containers: []
	W0127 03:01:33.441388  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:33.441401  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:33.441415  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:33.516970  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:33.517022  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:33.535279  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:33.535313  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:33.617985  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:33.618013  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:33.618032  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:33.715673  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:33.715739  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:36.260552  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:36.279190  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:36.279290  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:36.337183  948597 cri.go:89] found id: ""
	I0127 03:01:36.337220  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.337232  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:36.337241  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:36.337310  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:36.384558  948597 cri.go:89] found id: ""
	I0127 03:01:36.384596  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.384608  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:36.384617  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:36.384686  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:36.439591  948597 cri.go:89] found id: ""
	I0127 03:01:36.439622  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.439633  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:36.439642  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:36.439713  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:36.484358  948597 cri.go:89] found id: ""
	I0127 03:01:36.484395  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.484412  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:36.484420  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:36.484496  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:36.527632  948597 cri.go:89] found id: ""
	I0127 03:01:36.527665  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.527676  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:36.527684  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:36.527750  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:36.568669  948597 cri.go:89] found id: ""
	I0127 03:01:36.568707  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.568720  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:36.568729  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:36.568801  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:36.605428  948597 cri.go:89] found id: ""
	I0127 03:01:36.605459  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.605468  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:36.605478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:36.605550  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:36.645714  948597 cri.go:89] found id: ""
	I0127 03:01:36.645745  948597 logs.go:282] 0 containers: []
	W0127 03:01:36.645754  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:36.645766  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:36.645781  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:36.731365  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:36.731403  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:36.731419  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:36.814212  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:36.814254  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:36.856194  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:36.856233  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:36.916349  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:36.916381  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:39.436532  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:39.449140  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:39.449210  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:39.481787  948597 cri.go:89] found id: ""
	I0127 03:01:39.481818  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.481827  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:39.481833  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:39.481914  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:39.518592  948597 cri.go:89] found id: ""
	I0127 03:01:39.518621  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.518630  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:39.518636  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:39.518689  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:39.553944  948597 cri.go:89] found id: ""
	I0127 03:01:39.553981  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.553991  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:39.553998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:39.554065  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:39.592879  948597 cri.go:89] found id: ""
	I0127 03:01:39.592910  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.592941  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:39.592951  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:39.593019  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:39.627918  948597 cri.go:89] found id: ""
	I0127 03:01:39.627957  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.627969  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:39.627977  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:39.628048  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:39.672283  948597 cri.go:89] found id: ""
	I0127 03:01:39.672314  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.672326  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:39.672334  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:39.672402  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:39.717676  948597 cri.go:89] found id: ""
	I0127 03:01:39.717715  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.717729  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:39.717738  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:39.717816  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:39.769531  948597 cri.go:89] found id: ""
	I0127 03:01:39.769562  948597 logs.go:282] 0 containers: []
	W0127 03:01:39.769570  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:39.769580  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:39.769592  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:39.824255  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:39.824308  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:39.839595  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:39.839637  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:39.934427  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:39.934459  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:39.934475  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:40.029244  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:40.029287  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:42.569345  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:42.581864  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:42.581947  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:42.615021  948597 cri.go:89] found id: ""
	I0127 03:01:42.615051  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.615059  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:42.615065  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:42.615142  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:42.648856  948597 cri.go:89] found id: ""
	I0127 03:01:42.648889  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.648897  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:42.648903  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:42.648979  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:42.680794  948597 cri.go:89] found id: ""
	I0127 03:01:42.680822  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.680831  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:42.680838  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:42.680916  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:42.713381  948597 cri.go:89] found id: ""
	I0127 03:01:42.713421  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.713433  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:42.713441  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:42.713511  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:42.746982  948597 cri.go:89] found id: ""
	I0127 03:01:42.747009  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.747020  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:42.747026  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:42.747096  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:42.781132  948597 cri.go:89] found id: ""
	I0127 03:01:42.781161  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.781169  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:42.781175  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:42.781227  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:42.814006  948597 cri.go:89] found id: ""
	I0127 03:01:42.814054  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.814070  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:42.814078  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:42.814148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:42.846896  948597 cri.go:89] found id: ""
	I0127 03:01:42.846924  948597 logs.go:282] 0 containers: []
	W0127 03:01:42.846932  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:42.846942  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:42.846955  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:42.887825  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:42.887860  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:42.936334  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:42.936382  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:42.949813  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:42.949856  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:43.018993  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:43.019020  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:43.019034  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:45.599348  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:45.613254  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:45.613351  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:45.649722  948597 cri.go:89] found id: ""
	I0127 03:01:45.649750  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.649759  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:45.649765  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:45.649820  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:45.683304  948597 cri.go:89] found id: ""
	I0127 03:01:45.683337  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.683358  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:45.683366  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:45.683433  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:45.720349  948597 cri.go:89] found id: ""
	I0127 03:01:45.720379  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.720388  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:45.720393  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:45.720444  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:45.756037  948597 cri.go:89] found id: ""
	I0127 03:01:45.756066  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.756077  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:45.756085  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:45.756152  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:45.789081  948597 cri.go:89] found id: ""
	I0127 03:01:45.789111  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.789123  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:45.789132  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:45.789201  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:45.825809  948597 cri.go:89] found id: ""
	I0127 03:01:45.825841  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.825852  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:45.825860  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:45.825923  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:45.859304  948597 cri.go:89] found id: ""
	I0127 03:01:45.859339  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.859352  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:45.859360  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:45.859429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:45.895925  948597 cri.go:89] found id: ""
	I0127 03:01:45.895959  948597 logs.go:282] 0 containers: []
	W0127 03:01:45.895971  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:45.895990  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:45.896006  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:45.910961  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:45.910995  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:45.982139  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:45.982173  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:45.982192  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:46.067354  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:46.067398  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:46.105325  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:46.105360  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:48.658412  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:48.670985  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:48.671075  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:48.711794  948597 cri.go:89] found id: ""
	I0127 03:01:48.711828  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.711840  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:48.711849  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:48.711925  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:48.754553  948597 cri.go:89] found id: ""
	I0127 03:01:48.754581  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.754592  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:48.754600  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:48.754667  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:48.799891  948597 cri.go:89] found id: ""
	I0127 03:01:48.799917  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.799927  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:48.799936  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:48.800002  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:48.839365  948597 cri.go:89] found id: ""
	I0127 03:01:48.839405  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.839417  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:48.839426  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:48.839500  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:48.888994  948597 cri.go:89] found id: ""
	I0127 03:01:48.889027  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.889038  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:48.889046  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:48.889126  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:48.926255  948597 cri.go:89] found id: ""
	I0127 03:01:48.926290  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.926301  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:48.926310  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:48.926406  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:48.964873  948597 cri.go:89] found id: ""
	I0127 03:01:48.964905  948597 logs.go:282] 0 containers: []
	W0127 03:01:48.964916  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:48.964945  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:48.965016  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:49.006585  948597 cri.go:89] found id: ""
	I0127 03:01:49.006617  948597 logs.go:282] 0 containers: []
	W0127 03:01:49.006627  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:49.006638  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:49.006653  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:49.073243  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:49.073293  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:49.089518  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:49.089553  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:49.174857  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:49.174892  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:49.174909  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:49.271349  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:49.271404  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:51.821324  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:51.839569  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:51.839646  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:51.877408  948597 cri.go:89] found id: ""
	I0127 03:01:51.877437  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.877444  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:51.877450  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:51.877506  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:51.911605  948597 cri.go:89] found id: ""
	I0127 03:01:51.911654  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.911667  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:51.911676  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:51.911748  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:51.947033  948597 cri.go:89] found id: ""
	I0127 03:01:51.947078  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.947092  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:51.947101  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:51.947164  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:51.979689  948597 cri.go:89] found id: ""
	I0127 03:01:51.979725  948597 logs.go:282] 0 containers: []
	W0127 03:01:51.979736  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:51.979744  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:51.979826  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:52.015971  948597 cri.go:89] found id: ""
	I0127 03:01:52.016011  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.016023  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:52.016031  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:52.016105  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:52.050395  948597 cri.go:89] found id: ""
	I0127 03:01:52.050427  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.050437  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:52.050446  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:52.050515  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:52.084279  948597 cri.go:89] found id: ""
	I0127 03:01:52.084315  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.084327  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:52.084336  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:52.084411  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:52.118989  948597 cri.go:89] found id: ""
	I0127 03:01:52.119022  948597 logs.go:282] 0 containers: []
	W0127 03:01:52.119034  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:52.119047  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:52.119074  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:52.180108  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:52.180151  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:52.194532  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:52.194584  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:52.267927  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:52.267951  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:52.267975  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:52.345103  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:52.345145  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:54.884393  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:54.897841  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:54.897943  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:54.932485  948597 cri.go:89] found id: ""
	I0127 03:01:54.932524  948597 logs.go:282] 0 containers: []
	W0127 03:01:54.932536  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:54.932545  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:54.932689  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:54.968368  948597 cri.go:89] found id: ""
	I0127 03:01:54.968400  948597 logs.go:282] 0 containers: []
	W0127 03:01:54.968412  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:54.968419  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:54.968484  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:55.001707  948597 cri.go:89] found id: ""
	I0127 03:01:55.001743  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.001755  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:55.001762  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:55.001835  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:55.037616  948597 cri.go:89] found id: ""
	I0127 03:01:55.037654  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.037665  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:55.037672  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:55.037740  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:55.079188  948597 cri.go:89] found id: ""
	I0127 03:01:55.079219  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.079230  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:55.079251  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:55.079342  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:55.128821  948597 cri.go:89] found id: ""
	I0127 03:01:55.128855  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.128864  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:55.128872  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:55.128969  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:55.170723  948597 cri.go:89] found id: ""
	I0127 03:01:55.170751  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.170759  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:55.170765  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:55.170818  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:55.207344  948597 cri.go:89] found id: ""
	I0127 03:01:55.207385  948597 logs.go:282] 0 containers: []
	W0127 03:01:55.207398  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:55.207408  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:55.207422  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:55.288046  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:55.288078  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:55.288097  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:55.366433  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:55.366484  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:55.403270  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:55.403317  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:01:55.455241  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:55.455298  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:57.970581  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:01:57.987960  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:01:57.988048  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:01:58.035438  948597 cri.go:89] found id: ""
	I0127 03:01:58.035475  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.035485  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:01:58.035494  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:01:58.035565  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:01:58.071013  948597 cri.go:89] found id: ""
	I0127 03:01:58.071053  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.071065  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:01:58.071073  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:01:58.071148  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:01:58.111925  948597 cri.go:89] found id: ""
	I0127 03:01:58.111964  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.111976  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:01:58.111983  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:01:58.112053  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:01:58.146183  948597 cri.go:89] found id: ""
	I0127 03:01:58.146220  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.146230  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:01:58.146238  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:01:58.146310  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:01:58.184977  948597 cri.go:89] found id: ""
	I0127 03:01:58.185005  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.185013  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:01:58.185019  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:01:58.185085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:01:58.223037  948597 cri.go:89] found id: ""
	I0127 03:01:58.223073  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.223084  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:01:58.223093  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:01:58.223174  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:01:58.258659  948597 cri.go:89] found id: ""
	I0127 03:01:58.258687  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.258695  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:01:58.258701  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:01:58.258753  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:01:58.296174  948597 cri.go:89] found id: ""
	I0127 03:01:58.296209  948597 logs.go:282] 0 containers: []
	W0127 03:01:58.296220  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:01:58.296233  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:01:58.296256  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:01:58.309974  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:01:58.310009  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:01:58.397312  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:01:58.397338  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:01:58.397352  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:01:58.482188  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:01:58.482247  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:01:58.526400  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:01:58.526441  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:01.086115  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:01.098319  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:01.098400  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:01.135609  948597 cri.go:89] found id: ""
	I0127 03:02:01.135645  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.135657  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:01.135665  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:01.135739  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:01.174294  948597 cri.go:89] found id: ""
	I0127 03:02:01.174329  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.174340  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:01.174347  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:01.174422  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:01.210942  948597 cri.go:89] found id: ""
	I0127 03:02:01.210976  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.210987  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:01.210995  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:01.211069  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:01.249566  948597 cri.go:89] found id: ""
	I0127 03:02:01.249599  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.249610  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:01.249619  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:01.249696  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:01.289367  948597 cri.go:89] found id: ""
	I0127 03:02:01.289405  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.289415  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:01.289423  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:01.289489  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:01.324768  948597 cri.go:89] found id: ""
	I0127 03:02:01.324806  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.324816  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:01.324824  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:01.324876  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:01.363159  948597 cri.go:89] found id: ""
	I0127 03:02:01.363192  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.363204  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:01.363211  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:01.363279  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:01.401686  948597 cri.go:89] found id: ""
	I0127 03:02:01.401715  948597 logs.go:282] 0 containers: []
	W0127 03:02:01.401724  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:01.401735  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:01.401746  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:01.443049  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:01.443093  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:01.495506  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:01.495548  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:01.509294  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:01.509329  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:01.574977  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:01.575010  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:01.575025  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:04.174983  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:04.187588  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:04.187668  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:04.223414  948597 cri.go:89] found id: ""
	I0127 03:02:04.223448  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.223457  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:04.223463  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:04.223527  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:04.259031  948597 cri.go:89] found id: ""
	I0127 03:02:04.259071  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.259083  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:04.259091  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:04.259165  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:04.290320  948597 cri.go:89] found id: ""
	I0127 03:02:04.290357  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.290368  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:04.290374  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:04.290429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:04.322432  948597 cri.go:89] found id: ""
	I0127 03:02:04.322463  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.322472  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:04.322478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:04.322533  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:04.356422  948597 cri.go:89] found id: ""
	I0127 03:02:04.356458  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.356466  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:04.356472  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:04.356526  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:04.392999  948597 cri.go:89] found id: ""
	I0127 03:02:04.393034  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.393046  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:04.393054  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:04.393125  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:04.426275  948597 cri.go:89] found id: ""
	I0127 03:02:04.426305  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.426312  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:04.426318  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:04.426370  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:04.460208  948597 cri.go:89] found id: ""
	I0127 03:02:04.460234  948597 logs.go:282] 0 containers: []
	W0127 03:02:04.460242  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:04.460252  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:04.460263  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:04.501349  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:04.501387  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:04.550576  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:04.550611  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:04.565042  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:04.565081  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:04.659906  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:04.659935  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:04.659953  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:07.245086  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:07.257839  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:07.257908  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:07.296057  948597 cri.go:89] found id: ""
	I0127 03:02:07.296089  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.296098  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:07.296104  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:07.296177  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:07.329833  948597 cri.go:89] found id: ""
	I0127 03:02:07.329886  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.329914  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:07.329926  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:07.329994  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:07.364273  948597 cri.go:89] found id: ""
	I0127 03:02:07.364317  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.364329  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:07.364337  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:07.364406  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:07.399224  948597 cri.go:89] found id: ""
	I0127 03:02:07.399262  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.399274  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:07.399282  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:07.399377  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:07.437153  948597 cri.go:89] found id: ""
	I0127 03:02:07.437194  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.437205  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:07.437213  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:07.437285  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:07.472191  948597 cri.go:89] found id: ""
	I0127 03:02:07.472221  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.472230  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:07.472239  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:07.472295  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:07.507029  948597 cri.go:89] found id: ""
	I0127 03:02:07.507066  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.507078  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:07.507086  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:07.507185  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:07.540312  948597 cri.go:89] found id: ""
	I0127 03:02:07.540348  948597 logs.go:282] 0 containers: []
	W0127 03:02:07.540360  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:07.540374  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:07.540392  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:07.589839  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:07.589893  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:07.603285  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:07.603321  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:07.679572  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:07.679597  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:07.679611  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:07.756859  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:07.756902  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:10.297730  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:10.310440  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:10.310510  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:10.343835  948597 cri.go:89] found id: ""
	I0127 03:02:10.343871  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.343883  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:10.343891  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:10.343949  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:10.383557  948597 cri.go:89] found id: ""
	I0127 03:02:10.383594  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.383605  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:10.383614  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:10.383695  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:10.426364  948597 cri.go:89] found id: ""
	I0127 03:02:10.426414  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.426425  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:10.426432  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:10.426513  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:10.463567  948597 cri.go:89] found id: ""
	I0127 03:02:10.463621  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.463633  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:10.463642  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:10.463705  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:10.498363  948597 cri.go:89] found id: ""
	I0127 03:02:10.498400  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.498411  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:10.498419  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:10.498495  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:10.532805  948597 cri.go:89] found id: ""
	I0127 03:02:10.532835  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.532847  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:10.532854  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:10.532951  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:10.568537  948597 cri.go:89] found id: ""
	I0127 03:02:10.568573  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.568583  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:10.568590  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:10.568662  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:10.607965  948597 cri.go:89] found id: ""
	I0127 03:02:10.608002  948597 logs.go:282] 0 containers: []
	W0127 03:02:10.608013  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:10.608025  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:10.608040  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:10.658406  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:10.658447  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:10.671754  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:10.671801  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:10.741340  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:10.741367  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:10.741382  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:10.817535  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:10.817577  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:13.364226  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:13.376663  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:13.376748  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:13.415723  948597 cri.go:89] found id: ""
	I0127 03:02:13.415770  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.415784  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:13.415793  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:13.415894  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:13.453997  948597 cri.go:89] found id: ""
	I0127 03:02:13.454026  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.454034  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:13.454040  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:13.454099  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:13.495966  948597 cri.go:89] found id: ""
	I0127 03:02:13.495998  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.496009  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:13.496020  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:13.496085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:13.533583  948597 cri.go:89] found id: ""
	I0127 03:02:13.533635  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.533649  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:13.533659  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:13.533738  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:13.571359  948597 cri.go:89] found id: ""
	I0127 03:02:13.571392  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.571401  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:13.571408  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:13.571473  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:13.603720  948597 cri.go:89] found id: ""
	I0127 03:02:13.603748  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.603757  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:13.603763  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:13.603814  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:13.635945  948597 cri.go:89] found id: ""
	I0127 03:02:13.635980  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.635991  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:13.635999  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:13.636091  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:13.668778  948597 cri.go:89] found id: ""
	I0127 03:02:13.668807  948597 logs.go:282] 0 containers: []
	W0127 03:02:13.668821  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:13.668838  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:13.668853  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:13.722543  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:13.722591  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:13.737899  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:13.737927  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:13.805217  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:13.805249  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:13.805264  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:13.882548  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:13.882590  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:16.423402  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:16.436808  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:16.436895  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:16.473315  948597 cri.go:89] found id: ""
	I0127 03:02:16.473350  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.473361  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:16.473370  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:16.473440  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:16.513258  948597 cri.go:89] found id: ""
	I0127 03:02:16.513292  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.513305  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:16.513320  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:16.513382  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:16.550193  948597 cri.go:89] found id: ""
	I0127 03:02:16.550231  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.550242  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:16.550250  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:16.550316  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:16.586397  948597 cri.go:89] found id: ""
	I0127 03:02:16.586430  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.586440  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:16.586448  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:16.586512  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:16.620605  948597 cri.go:89] found id: ""
	I0127 03:02:16.620642  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.620653  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:16.620661  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:16.620731  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:16.657792  948597 cri.go:89] found id: ""
	I0127 03:02:16.657825  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.657837  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:16.657846  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:16.657915  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:16.695941  948597 cri.go:89] found id: ""
	I0127 03:02:16.695976  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.695996  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:16.696006  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:16.696097  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:16.737119  948597 cri.go:89] found id: ""
	I0127 03:02:16.737152  948597 logs.go:282] 0 containers: []
	W0127 03:02:16.737164  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:16.737176  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:16.737192  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:16.774412  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:16.774449  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:16.830564  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:16.830607  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:16.845433  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:16.845469  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:16.926137  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:16.926166  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:16.926183  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:19.509069  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:19.522347  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:19.522429  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:19.556817  948597 cri.go:89] found id: ""
	I0127 03:02:19.556856  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.556867  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:19.556876  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:19.556967  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:19.591065  948597 cri.go:89] found id: ""
	I0127 03:02:19.591104  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.591120  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:19.591129  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:19.591199  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:19.626207  948597 cri.go:89] found id: ""
	I0127 03:02:19.626246  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.626260  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:19.626266  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:19.626320  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:19.658517  948597 cri.go:89] found id: ""
	I0127 03:02:19.658551  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.658559  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:19.658565  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:19.658617  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:19.691209  948597 cri.go:89] found id: ""
	I0127 03:02:19.691240  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.691249  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:19.691255  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:19.691306  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:19.728210  948597 cri.go:89] found id: ""
	I0127 03:02:19.728248  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.728260  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:19.728270  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:19.728332  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:19.764049  948597 cri.go:89] found id: ""
	I0127 03:02:19.764083  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.764092  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:19.764100  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:19.764167  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:19.795692  948597 cri.go:89] found id: ""
	I0127 03:02:19.795726  948597 logs.go:282] 0 containers: []
	W0127 03:02:19.795736  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:19.795749  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:19.795767  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:19.808465  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:19.808506  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:19.879069  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:19.879091  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:19.879105  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:19.960288  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:19.960331  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:19.997481  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:19.997521  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:22.551421  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:22.567026  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:22.567121  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:22.615737  948597 cri.go:89] found id: ""
	I0127 03:02:22.615773  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.615782  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:22.615788  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:22.615858  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:22.659753  948597 cri.go:89] found id: ""
	I0127 03:02:22.659798  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.659810  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:22.659817  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:22.659891  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:22.693156  948597 cri.go:89] found id: ""
	I0127 03:02:22.693192  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.693203  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:22.693210  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:22.693288  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:22.725239  948597 cri.go:89] found id: ""
	I0127 03:02:22.725268  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.725278  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:22.725284  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:22.725340  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:22.760821  948597 cri.go:89] found id: ""
	I0127 03:02:22.760861  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.760874  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:22.760883  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:22.760977  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:22.793734  948597 cri.go:89] found id: ""
	I0127 03:02:22.793763  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.793772  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:22.793789  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:22.793875  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:22.827763  948597 cri.go:89] found id: ""
	I0127 03:02:22.827803  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.827814  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:22.827820  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:22.827882  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:22.863065  948597 cri.go:89] found id: ""
	I0127 03:02:22.863108  948597 logs.go:282] 0 containers: []
	W0127 03:02:22.863120  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:22.863132  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:22.863145  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:22.910867  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:22.910913  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:22.924232  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:22.924263  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:22.990323  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:22.990345  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:22.990358  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:23.069076  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:23.069138  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:25.607860  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:25.621115  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:25.621189  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:25.655019  948597 cri.go:89] found id: ""
	I0127 03:02:25.655062  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.655074  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:25.655083  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:25.655158  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:25.688118  948597 cri.go:89] found id: ""
	I0127 03:02:25.688149  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.688158  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:25.688165  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:25.688218  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:25.719961  948597 cri.go:89] found id: ""
	I0127 03:02:25.719995  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.720006  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:25.720013  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:25.720066  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:25.751757  948597 cri.go:89] found id: ""
	I0127 03:02:25.751793  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.751805  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:25.751813  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:25.751874  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:25.785054  948597 cri.go:89] found id: ""
	I0127 03:02:25.785090  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.785102  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:25.785111  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:25.785192  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:25.818010  948597 cri.go:89] found id: ""
	I0127 03:02:25.818046  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.818054  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:25.818060  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:25.818127  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:25.849718  948597 cri.go:89] found id: ""
	I0127 03:02:25.849757  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.849768  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:25.849776  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:25.849837  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:25.891145  948597 cri.go:89] found id: ""
	I0127 03:02:25.891185  948597 logs.go:282] 0 containers: []
	W0127 03:02:25.891197  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:25.891210  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:25.891230  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:25.969368  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:25.969411  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:26.009100  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:26.009142  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:26.054519  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:26.054562  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:26.067846  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:26.067879  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:26.142789  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
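Every "describe nodes" attempt fails the same way: the bundled kubectl points at the in-guest apiserver endpoint localhost:8443 (from /var/lib/minikube/kubeconfig), and because no kube-apiserver container exists (see the empty crictl listings above) nothing is listening there, so the TCP connection is refused. A minimal probe of that endpoint, assuming it is run inside the guest; this is only a sketch of the symptom, not part of the test code.

package main

import (
	"fmt"
	"net"
	"time"
)

// Try the address kubectl is using. With no kube-apiserver running, the
// dial fails with "connection refused", which is the error wrapped in
// the "failed describe nodes" log lines above.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}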
	I0127 03:02:28.643898  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:28.656621  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:28.656692  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:28.698197  948597 cri.go:89] found id: ""
	I0127 03:02:28.698228  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.698235  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:28.698242  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:28.698301  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:28.730375  948597 cri.go:89] found id: ""
	I0127 03:02:28.730412  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.730424  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:28.730432  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:28.730500  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:28.764820  948597 cri.go:89] found id: ""
	I0127 03:02:28.764863  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.764879  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:28.764887  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:28.764983  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:28.796878  948597 cri.go:89] found id: ""
	I0127 03:02:28.796912  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.796941  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:28.796950  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:28.797012  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:28.830844  948597 cri.go:89] found id: ""
	I0127 03:02:28.830888  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.830897  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:28.830903  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:28.830959  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:28.863229  948597 cri.go:89] found id: ""
	I0127 03:02:28.863261  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.863272  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:28.863280  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:28.863341  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:28.900738  948597 cri.go:89] found id: ""
	I0127 03:02:28.900780  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.900792  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:28.900800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:28.900873  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:28.934622  948597 cri.go:89] found id: ""
	I0127 03:02:28.934663  948597 logs.go:282] 0 containers: []
	W0127 03:02:28.934674  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:28.934690  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:28.934707  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:29.014874  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:29.014922  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:29.066883  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:29.066916  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:29.121381  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:29.121424  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:29.135916  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:29.135950  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:29.201815  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:31.702259  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:31.715374  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:31.715452  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:31.748460  948597 cri.go:89] found id: ""
	I0127 03:02:31.748496  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.748508  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:31.748517  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:31.748587  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:31.780124  948597 cri.go:89] found id: ""
	I0127 03:02:31.780161  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.780173  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:31.780180  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:31.780247  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:31.816546  948597 cri.go:89] found id: ""
	I0127 03:02:31.816579  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.816592  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:31.816599  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:31.816667  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:31.849343  948597 cri.go:89] found id: ""
	I0127 03:02:31.849377  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.849388  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:31.849395  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:31.849466  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:31.881664  948597 cri.go:89] found id: ""
	I0127 03:02:31.881694  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.881703  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:31.881710  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:31.881764  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:31.919480  948597 cri.go:89] found id: ""
	I0127 03:02:31.919518  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.919528  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:31.919536  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:31.919603  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:31.952360  948597 cri.go:89] found id: ""
	I0127 03:02:31.952389  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.952397  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:31.952403  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:31.952456  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:31.987865  948597 cri.go:89] found id: ""
	I0127 03:02:31.987895  948597 logs.go:282] 0 containers: []
	W0127 03:02:31.987903  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:31.987914  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:31.987927  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:32.001095  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:32.001130  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:32.071197  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:32.071229  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:32.071246  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:32.157042  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:32.157089  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:32.195293  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:32.195328  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:34.747191  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:34.759950  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:34.760017  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:34.794269  948597 cri.go:89] found id: ""
	I0127 03:02:34.794300  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.794309  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:34.794316  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:34.794372  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:34.833580  948597 cri.go:89] found id: ""
	I0127 03:02:34.833617  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.833629  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:34.833637  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:34.833705  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:34.868608  948597 cri.go:89] found id: ""
	I0127 03:02:34.868640  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.868649  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:34.868655  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:34.868718  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:34.901502  948597 cri.go:89] found id: ""
	I0127 03:02:34.901534  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.901544  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:34.901550  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:34.901603  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:34.935196  948597 cri.go:89] found id: ""
	I0127 03:02:34.935231  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.935243  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:34.935252  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:34.935317  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:34.970481  948597 cri.go:89] found id: ""
	I0127 03:02:34.970521  948597 logs.go:282] 0 containers: []
	W0127 03:02:34.970534  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:34.970544  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:34.970611  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:35.003207  948597 cri.go:89] found id: ""
	I0127 03:02:35.003243  948597 logs.go:282] 0 containers: []
	W0127 03:02:35.003255  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:35.003270  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:35.003328  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:35.036258  948597 cri.go:89] found id: ""
	I0127 03:02:35.036289  948597 logs.go:282] 0 containers: []
	W0127 03:02:35.036298  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:35.036318  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:35.036336  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:35.090186  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:35.090225  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:35.103908  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:35.103942  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:35.174212  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:35.174237  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:35.174251  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:35.248068  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:35.248111  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:37.785610  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:37.798369  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:37.798457  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:37.830553  948597 cri.go:89] found id: ""
	I0127 03:02:37.830593  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.830605  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:37.830615  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:37.830679  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:37.861930  948597 cri.go:89] found id: ""
	I0127 03:02:37.861964  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.861973  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:37.861979  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:37.862040  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:37.893267  948597 cri.go:89] found id: ""
	I0127 03:02:37.893302  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.893314  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:37.893323  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:37.893382  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:37.929928  948597 cri.go:89] found id: ""
	I0127 03:02:37.929958  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.929967  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:37.929973  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:37.930034  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:37.964592  948597 cri.go:89] found id: ""
	I0127 03:02:37.964622  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.964631  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:37.964637  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:37.964707  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:37.997396  948597 cri.go:89] found id: ""
	I0127 03:02:37.997434  948597 logs.go:282] 0 containers: []
	W0127 03:02:37.997443  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:37.997450  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:37.997512  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:38.030060  948597 cri.go:89] found id: ""
	I0127 03:02:38.030094  948597 logs.go:282] 0 containers: []
	W0127 03:02:38.030106  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:38.030116  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:38.030184  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:38.068588  948597 cri.go:89] found id: ""
	I0127 03:02:38.068616  948597 logs.go:282] 0 containers: []
	W0127 03:02:38.068624  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:38.068635  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:38.068647  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:38.122002  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:38.122059  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:38.137266  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:38.137304  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:38.214548  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:38.214578  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:38.214597  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:38.294408  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:38.294453  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:40.845126  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:40.858786  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:40.858871  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:40.897021  948597 cri.go:89] found id: ""
	I0127 03:02:40.897063  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.897076  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:40.897084  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:40.897161  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:40.937138  948597 cri.go:89] found id: ""
	I0127 03:02:40.937173  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.937185  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:40.937193  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:40.937258  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:40.974746  948597 cri.go:89] found id: ""
	I0127 03:02:40.974780  948597 logs.go:282] 0 containers: []
	W0127 03:02:40.974792  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:40.974800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:40.974872  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:41.011838  948597 cri.go:89] found id: ""
	I0127 03:02:41.011869  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.011880  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:41.011888  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:41.011961  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:41.047294  948597 cri.go:89] found id: ""
	I0127 03:02:41.047325  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.047337  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:41.047344  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:41.047426  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:41.082188  948597 cri.go:89] found id: ""
	I0127 03:02:41.082222  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.082234  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:41.082241  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:41.082311  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:41.117046  948597 cri.go:89] found id: ""
	I0127 03:02:41.117082  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.117093  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:41.117099  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:41.117169  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:41.154963  948597 cri.go:89] found id: ""
	I0127 03:02:41.154995  948597 logs.go:282] 0 containers: []
	W0127 03:02:41.155004  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:41.155014  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:41.155027  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:41.206373  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:41.206443  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:41.222908  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:41.222940  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:41.300876  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:41.300903  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:41.300936  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:41.381123  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:41.381165  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:43.921070  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:43.937054  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:43.937144  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:43.974834  948597 cri.go:89] found id: ""
	I0127 03:02:43.974869  948597 logs.go:282] 0 containers: []
	W0127 03:02:43.974880  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:43.974889  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:43.974953  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:44.008986  948597 cri.go:89] found id: ""
	I0127 03:02:44.009027  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.009062  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:44.009072  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:44.009160  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:44.040585  948597 cri.go:89] found id: ""
	I0127 03:02:44.040616  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.040625  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:44.040631  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:44.040703  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:44.079406  948597 cri.go:89] found id: ""
	I0127 03:02:44.079432  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.079439  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:44.079445  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:44.079495  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:44.112089  948597 cri.go:89] found id: ""
	I0127 03:02:44.112118  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.112134  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:44.112144  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:44.112206  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:44.145509  948597 cri.go:89] found id: ""
	I0127 03:02:44.145544  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.145555  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:44.145563  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:44.145643  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:44.186775  948597 cri.go:89] found id: ""
	I0127 03:02:44.186804  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.186823  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:44.186830  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:44.186890  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:44.221445  948597 cri.go:89] found id: ""
	I0127 03:02:44.221483  948597 logs.go:282] 0 containers: []
	W0127 03:02:44.221495  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:44.221511  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:44.221530  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:44.261993  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:44.262028  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:44.335242  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:44.335299  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:44.350005  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:44.350042  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:44.413941  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:44.413965  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:44.413982  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:46.991377  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:47.004881  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:47.004973  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:47.042773  948597 cri.go:89] found id: ""
	I0127 03:02:47.042821  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.042834  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:47.042842  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:47.042920  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:47.080578  948597 cri.go:89] found id: ""
	I0127 03:02:47.080608  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.080618  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:47.080628  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:47.080704  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:47.120594  948597 cri.go:89] found id: ""
	I0127 03:02:47.120620  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.120628  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:47.120634  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:47.120693  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:47.155304  948597 cri.go:89] found id: ""
	I0127 03:02:47.155354  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.155367  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:47.155376  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:47.155444  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:47.192138  948597 cri.go:89] found id: ""
	I0127 03:02:47.192174  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.192184  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:47.192192  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:47.192258  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:47.230739  948597 cri.go:89] found id: ""
	I0127 03:02:47.230769  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.230783  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:47.230800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:47.230865  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:47.267286  948597 cri.go:89] found id: ""
	I0127 03:02:47.267329  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.267341  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:47.267350  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:47.267420  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:47.307436  948597 cri.go:89] found id: ""
	I0127 03:02:47.307476  948597 logs.go:282] 0 containers: []
	W0127 03:02:47.307487  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:47.307504  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:47.307522  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:47.322640  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:47.322687  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:47.424519  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:47.424555  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:47.424580  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:47.519021  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:47.519066  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:47.563116  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:47.563161  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:50.137734  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:50.151186  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:50.151259  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:50.190817  948597 cri.go:89] found id: ""
	I0127 03:02:50.190848  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.190859  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:50.190868  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:50.190929  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:50.233878  948597 cri.go:89] found id: ""
	I0127 03:02:50.233916  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.233927  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:50.233935  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:50.233997  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:50.273135  948597 cri.go:89] found id: ""
	I0127 03:02:50.273168  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.273180  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:50.273187  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:50.273248  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:50.310017  948597 cri.go:89] found id: ""
	I0127 03:02:50.310054  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.310067  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:50.310076  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:50.310144  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:50.349345  948597 cri.go:89] found id: ""
	I0127 03:02:50.349375  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.349387  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:50.349413  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:50.349476  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:50.404790  948597 cri.go:89] found id: ""
	I0127 03:02:50.404828  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.404840  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:50.404849  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:50.404903  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:50.440618  948597 cri.go:89] found id: ""
	I0127 03:02:50.440649  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.440659  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:50.440665  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:50.440711  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:50.487718  948597 cri.go:89] found id: ""
	I0127 03:02:50.487755  948597 logs.go:282] 0 containers: []
	W0127 03:02:50.487766  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:50.487779  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:50.487822  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:50.609448  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:50.609504  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:50.675188  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:50.675226  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:50.732431  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:50.732471  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:50.749181  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:50.749224  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:50.824904  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:53.325129  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:53.338949  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:53.339016  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:53.377836  948597 cri.go:89] found id: ""
	I0127 03:02:53.377863  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.377871  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:53.377877  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:53.377930  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:53.416913  948597 cri.go:89] found id: ""
	I0127 03:02:53.416967  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.416978  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:53.416986  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:53.417060  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:53.454824  948597 cri.go:89] found id: ""
	I0127 03:02:53.454851  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.454862  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:53.454877  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:53.454949  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:53.493292  948597 cri.go:89] found id: ""
	I0127 03:02:53.493324  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.493332  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:53.493339  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:53.493403  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:53.531858  948597 cri.go:89] found id: ""
	I0127 03:02:53.531891  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.531900  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:53.531906  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:53.531956  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:53.574685  948597 cri.go:89] found id: ""
	I0127 03:02:53.574715  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.574726  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:53.574734  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:53.574805  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:53.609903  948597 cri.go:89] found id: ""
	I0127 03:02:53.609944  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.609955  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:53.609962  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:53.610019  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:53.647339  948597 cri.go:89] found id: ""
	I0127 03:02:53.647378  948597 logs.go:282] 0 containers: []
	W0127 03:02:53.647391  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:53.647406  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:53.647423  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:53.696028  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:53.696065  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:53.761023  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:53.761064  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:53.774967  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:53.775014  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:53.852061  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:53.852086  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:53.852102  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:56.452212  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:56.466008  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:56.466079  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:56.503939  948597 cri.go:89] found id: ""
	I0127 03:02:56.503971  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.503979  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:56.503986  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:56.504038  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:56.545889  948597 cri.go:89] found id: ""
	I0127 03:02:56.545927  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.545939  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:56.545948  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:56.546019  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:56.587938  948597 cri.go:89] found id: ""
	I0127 03:02:56.587972  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.587983  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:56.587998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:56.588080  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:56.626257  948597 cri.go:89] found id: ""
	I0127 03:02:56.626338  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.626354  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:56.626362  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:56.626428  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:56.681294  948597 cri.go:89] found id: ""
	I0127 03:02:56.681328  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.681339  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:56.681347  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:56.681412  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:56.716951  948597 cri.go:89] found id: ""
	I0127 03:02:56.716983  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.716991  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:56.716998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:56.717052  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:56.753410  948597 cri.go:89] found id: ""
	I0127 03:02:56.753442  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.753451  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:56.753458  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:56.753513  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:56.787678  948597 cri.go:89] found id: ""
	I0127 03:02:56.787711  948597 logs.go:282] 0 containers: []
	W0127 03:02:56.787724  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:56.787737  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:56.787754  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:02:56.842240  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:02:56.842279  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:02:56.856907  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:02:56.856955  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:02:56.926521  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:02:56.926542  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:02:56.926556  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:02:57.015810  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:57.015856  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:59.558063  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:02:59.575620  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:02:59.575712  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:02:59.618658  948597 cri.go:89] found id: ""
	I0127 03:02:59.618697  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.618708  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:02:59.618717  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:02:59.618820  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:02:59.659120  948597 cri.go:89] found id: ""
	I0127 03:02:59.659159  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.659170  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:02:59.659179  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:02:59.659249  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:02:59.704278  948597 cri.go:89] found id: ""
	I0127 03:02:59.704316  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.704328  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:02:59.704337  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:02:59.704413  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:02:59.747083  948597 cri.go:89] found id: ""
	I0127 03:02:59.747124  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.747136  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:02:59.747146  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:02:59.747218  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:02:59.784943  948597 cri.go:89] found id: ""
	I0127 03:02:59.784977  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.784990  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:02:59.784998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:02:59.785070  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:02:59.829715  948597 cri.go:89] found id: ""
	I0127 03:02:59.829752  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.829765  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:02:59.829773  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:02:59.829850  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:02:59.875367  948597 cri.go:89] found id: ""
	I0127 03:02:59.875395  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.875402  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:02:59.875408  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:02:59.875469  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:02:59.921766  948597 cri.go:89] found id: ""
	I0127 03:02:59.921805  948597 logs.go:282] 0 containers: []
	W0127 03:02:59.921817  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:02:59.921831  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:02:59.921849  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:02:59.969419  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:02:59.969464  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:00.021625  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:00.021668  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:00.037495  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:00.037534  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:00.130513  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:00.130618  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:00.130657  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:02.750964  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:02.765700  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:02.765787  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:02.810731  948597 cri.go:89] found id: ""
	I0127 03:03:02.810767  948597 logs.go:282] 0 containers: []
	W0127 03:03:02.810780  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:02.810789  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:02.810856  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:02.850700  948597 cri.go:89] found id: ""
	I0127 03:03:02.850738  948597 logs.go:282] 0 containers: []
	W0127 03:03:02.850751  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:02.850759  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:02.850830  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:02.896441  948597 cri.go:89] found id: ""
	I0127 03:03:02.896481  948597 logs.go:282] 0 containers: []
	W0127 03:03:02.896494  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:02.896503  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:02.896573  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:02.937978  948597 cri.go:89] found id: ""
	I0127 03:03:02.938011  948597 logs.go:282] 0 containers: []
	W0127 03:03:02.938019  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:02.938025  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:02.938098  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:02.975403  948597 cri.go:89] found id: ""
	I0127 03:03:02.975435  948597 logs.go:282] 0 containers: []
	W0127 03:03:02.975446  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:02.975454  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:02.975526  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:03.019529  948597 cri.go:89] found id: ""
	I0127 03:03:03.019592  948597 logs.go:282] 0 containers: []
	W0127 03:03:03.019607  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:03.019616  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:03.019688  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:03.071638  948597 cri.go:89] found id: ""
	I0127 03:03:03.071672  948597 logs.go:282] 0 containers: []
	W0127 03:03:03.071683  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:03.071692  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:03.071764  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:03.121172  948597 cri.go:89] found id: ""
	I0127 03:03:03.121210  948597 logs.go:282] 0 containers: []
	W0127 03:03:03.121223  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:03.121237  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:03.121254  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:03.178350  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:03.178397  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:03.197225  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:03.197258  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:03.268180  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:03.268207  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:03.268225  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:03.359036  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:03.359094  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:05.908262  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:05.928188  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:05.928273  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:05.974037  948597 cri.go:89] found id: ""
	I0127 03:03:05.974069  948597 logs.go:282] 0 containers: []
	W0127 03:03:05.974081  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:05.974089  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:05.974166  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:06.021150  948597 cri.go:89] found id: ""
	I0127 03:03:06.021188  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.021201  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:06.021209  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:06.021278  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:06.058163  948597 cri.go:89] found id: ""
	I0127 03:03:06.058195  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.058205  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:06.058220  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:06.058289  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:06.098028  948597 cri.go:89] found id: ""
	I0127 03:03:06.098056  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.098064  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:06.098070  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:06.098126  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:06.144846  948597 cri.go:89] found id: ""
	I0127 03:03:06.144891  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.144903  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:06.144911  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:06.145004  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:06.190515  948597 cri.go:89] found id: ""
	I0127 03:03:06.190544  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.190554  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:06.190562  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:06.190631  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:06.224099  948597 cri.go:89] found id: ""
	I0127 03:03:06.224135  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.224146  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:06.224155  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:06.224225  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:06.263378  948597 cri.go:89] found id: ""
	I0127 03:03:06.263413  948597 logs.go:282] 0 containers: []
	W0127 03:03:06.263424  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:06.263438  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:06.263455  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:06.355670  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:06.355715  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:06.399146  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:06.399183  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:06.449505  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:06.449545  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:06.465402  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:06.465435  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:06.537899  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:09.038843  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:09.063652  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:09.063732  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:09.115414  948597 cri.go:89] found id: ""
	I0127 03:03:09.115455  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.115466  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:09.115475  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:09.115587  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:09.165644  948597 cri.go:89] found id: ""
	I0127 03:03:09.165677  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.165688  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:09.165695  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:09.165774  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:09.200043  948597 cri.go:89] found id: ""
	I0127 03:03:09.200077  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.200087  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:09.200095  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:09.200177  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:09.234108  948597 cri.go:89] found id: ""
	I0127 03:03:09.234148  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.234159  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:09.234165  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:09.234226  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:09.268944  948597 cri.go:89] found id: ""
	I0127 03:03:09.268979  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.268990  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:09.268998  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:09.269068  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:09.303604  948597 cri.go:89] found id: ""
	I0127 03:03:09.303643  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.303656  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:09.303666  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:09.303726  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:09.337876  948597 cri.go:89] found id: ""
	I0127 03:03:09.337915  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.337927  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:09.337937  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:09.338006  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:09.373218  948597 cri.go:89] found id: ""
	I0127 03:03:09.373255  948597 logs.go:282] 0 containers: []
	W0127 03:03:09.373267  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:09.373280  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:09.373292  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:09.412001  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:09.412037  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:09.465996  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:09.466041  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:09.481740  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:09.481773  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:09.547976  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:09.548000  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:09.548013  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:12.128196  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:12.142859  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:12.142943  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:12.177300  948597 cri.go:89] found id: ""
	I0127 03:03:12.177338  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.177350  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:12.177358  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:12.177427  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:12.211344  948597 cri.go:89] found id: ""
	I0127 03:03:12.211385  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.211397  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:12.211405  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:12.211464  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:12.243671  948597 cri.go:89] found id: ""
	I0127 03:03:12.243700  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.243709  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:12.243716  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:12.243779  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:12.277531  948597 cri.go:89] found id: ""
	I0127 03:03:12.277564  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.277574  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:12.277581  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:12.277656  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:12.311867  948597 cri.go:89] found id: ""
	I0127 03:03:12.311905  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.311918  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:12.311926  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:12.311999  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:12.348592  948597 cri.go:89] found id: ""
	I0127 03:03:12.348628  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.348640  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:12.348648  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:12.348726  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:12.380121  948597 cri.go:89] found id: ""
	I0127 03:03:12.380160  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.380172  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:12.380181  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:12.380264  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:12.412738  948597 cri.go:89] found id: ""
	I0127 03:03:12.412774  948597 logs.go:282] 0 containers: []
	W0127 03:03:12.412786  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:12.412800  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:12.412817  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:12.468265  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:12.468304  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:12.484249  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:12.484297  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:12.555992  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:12.556018  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:12.556030  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:12.636075  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:12.636126  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:15.170709  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:15.189085  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:15.189174  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:15.234586  948597 cri.go:89] found id: ""
	I0127 03:03:15.234620  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.234631  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:15.234639  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:15.234710  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:15.274190  948597 cri.go:89] found id: ""
	I0127 03:03:15.274221  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.274232  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:15.274239  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:15.274304  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:15.319757  948597 cri.go:89] found id: ""
	I0127 03:03:15.319795  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.319806  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:15.319815  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:15.319880  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:15.359479  948597 cri.go:89] found id: ""
	I0127 03:03:15.359516  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.359528  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:15.359536  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:15.359608  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:15.401031  948597 cri.go:89] found id: ""
	I0127 03:03:15.401071  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.401083  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:15.401092  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:15.401166  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:15.440028  948597 cri.go:89] found id: ""
	I0127 03:03:15.440063  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.440074  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:15.440084  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:15.440147  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:15.479080  948597 cri.go:89] found id: ""
	I0127 03:03:15.479112  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.479123  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:15.479132  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:15.479198  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:15.523368  948597 cri.go:89] found id: ""
	I0127 03:03:15.523402  948597 logs.go:282] 0 containers: []
	W0127 03:03:15.523414  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:15.523427  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:15.523444  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:15.583816  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:15.583855  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:15.601270  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:15.601314  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:15.678967  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:15.678998  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:15.679013  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:15.754685  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:15.754726  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:18.292460  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:18.307739  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:18.307821  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:18.352730  948597 cri.go:89] found id: ""
	I0127 03:03:18.352759  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.352770  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:18.352779  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:18.352845  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:18.388950  948597 cri.go:89] found id: ""
	I0127 03:03:18.388987  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.388998  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:18.389006  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:18.389085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:18.430768  948597 cri.go:89] found id: ""
	I0127 03:03:18.430803  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.430815  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:18.430824  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:18.430898  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:18.466368  948597 cri.go:89] found id: ""
	I0127 03:03:18.466405  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.466416  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:18.466425  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:18.466497  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:18.503446  948597 cri.go:89] found id: ""
	I0127 03:03:18.503478  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.503492  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:18.503498  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:18.503551  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:18.542072  948597 cri.go:89] found id: ""
	I0127 03:03:18.542137  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.542149  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:18.542159  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:18.542228  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:18.586333  948597 cri.go:89] found id: ""
	I0127 03:03:18.586368  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.586381  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:18.586390  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:18.586477  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:18.632109  948597 cri.go:89] found id: ""
	I0127 03:03:18.632138  948597 logs.go:282] 0 containers: []
	W0127 03:03:18.632146  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:18.632156  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:18.632169  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:18.700015  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:18.700067  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:18.719497  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:18.719541  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:18.824961  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:18.824992  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:18.825011  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:18.926269  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:18.926317  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:21.471624  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:21.489856  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:21.489929  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:21.531003  948597 cri.go:89] found id: ""
	I0127 03:03:21.531049  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.531060  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:21.531069  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:21.531141  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:21.564738  948597 cri.go:89] found id: ""
	I0127 03:03:21.564776  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.564787  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:21.564795  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:21.564868  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:21.602149  948597 cri.go:89] found id: ""
	I0127 03:03:21.602183  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.602193  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:21.602202  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:21.602267  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:21.640128  948597 cri.go:89] found id: ""
	I0127 03:03:21.640163  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.640175  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:21.640184  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:21.640255  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:21.678973  948597 cri.go:89] found id: ""
	I0127 03:03:21.679007  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.679019  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:21.679026  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:21.679090  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:21.712411  948597 cri.go:89] found id: ""
	I0127 03:03:21.712453  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.712466  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:21.712478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:21.712550  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:21.748175  948597 cri.go:89] found id: ""
	I0127 03:03:21.748208  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.748218  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:21.748225  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:21.748289  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:21.788473  948597 cri.go:89] found id: ""
	I0127 03:03:21.788510  948597 logs.go:282] 0 containers: []
	W0127 03:03:21.788522  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:21.788536  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:21.788548  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:21.857295  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:21.857350  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:21.871109  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:21.871162  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:21.941486  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:21.941519  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:21.941536  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:22.028245  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:22.028282  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:24.569122  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:24.584874  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:24.584972  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:24.621014  948597 cri.go:89] found id: ""
	I0127 03:03:24.621046  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.621058  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:24.621068  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:24.621145  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:24.656425  948597 cri.go:89] found id: ""
	I0127 03:03:24.656463  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.656473  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:24.656481  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:24.656543  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:24.691257  948597 cri.go:89] found id: ""
	I0127 03:03:24.691289  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.691301  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:24.691309  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:24.691377  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:24.724789  948597 cri.go:89] found id: ""
	I0127 03:03:24.724825  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.724837  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:24.724844  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:24.724914  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:24.759164  948597 cri.go:89] found id: ""
	I0127 03:03:24.759200  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.759208  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:24.759217  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:24.759296  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:24.794442  948597 cri.go:89] found id: ""
	I0127 03:03:24.794477  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.794487  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:24.794495  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:24.794564  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:24.827690  948597 cri.go:89] found id: ""
	I0127 03:03:24.827726  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.827738  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:24.827746  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:24.827812  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:24.865767  948597 cri.go:89] found id: ""
	I0127 03:03:24.865815  948597 logs.go:282] 0 containers: []
	W0127 03:03:24.865824  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:24.865835  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:24.865848  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:24.919633  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:24.919679  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:24.933781  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:24.933822  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:25.005491  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:25.005521  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:25.005541  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:25.091645  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:25.091686  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:27.638216  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:27.652174  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:27.652278  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:27.689320  948597 cri.go:89] found id: ""
	I0127 03:03:27.689353  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.689364  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:27.689372  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:27.689460  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:27.725560  948597 cri.go:89] found id: ""
	I0127 03:03:27.725597  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.725609  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:27.725617  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:27.725680  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:27.758244  948597 cri.go:89] found id: ""
	I0127 03:03:27.758281  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.758302  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:27.758311  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:27.758382  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:27.791514  948597 cri.go:89] found id: ""
	I0127 03:03:27.791548  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.791560  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:27.791569  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:27.791656  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:27.829962  948597 cri.go:89] found id: ""
	I0127 03:03:27.829992  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.830000  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:27.830006  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:27.830079  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:27.870070  948597 cri.go:89] found id: ""
	I0127 03:03:27.870104  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.870112  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:27.870118  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:27.870176  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:27.912321  948597 cri.go:89] found id: ""
	I0127 03:03:27.912361  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.912373  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:27.912382  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:27.912454  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:27.947164  948597 cri.go:89] found id: ""
	I0127 03:03:27.947195  948597 logs.go:282] 0 containers: []
	W0127 03:03:27.947203  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:27.947214  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:27.947227  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:27.999595  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:27.999639  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:28.015808  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:28.015844  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:28.106050  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:28.106079  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:28.106103  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:28.211931  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:28.211975  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:30.762699  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:30.780169  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:30.780257  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:30.815012  948597 cri.go:89] found id: ""
	I0127 03:03:30.815054  948597 logs.go:282] 0 containers: []
	W0127 03:03:30.815066  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:30.815075  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:30.815134  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:30.853097  948597 cri.go:89] found id: ""
	I0127 03:03:30.853132  948597 logs.go:282] 0 containers: []
	W0127 03:03:30.853144  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:30.853151  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:30.853229  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:30.891780  948597 cri.go:89] found id: ""
	I0127 03:03:30.891818  948597 logs.go:282] 0 containers: []
	W0127 03:03:30.891830  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:30.891846  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:30.891922  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:30.932208  948597 cri.go:89] found id: ""
	I0127 03:03:30.932237  948597 logs.go:282] 0 containers: []
	W0127 03:03:30.932249  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:30.932257  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:30.932324  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:30.969421  948597 cri.go:89] found id: ""
	I0127 03:03:30.969456  948597 logs.go:282] 0 containers: []
	W0127 03:03:30.969469  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:30.969478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:30.969559  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:31.010915  948597 cri.go:89] found id: ""
	I0127 03:03:31.010947  948597 logs.go:282] 0 containers: []
	W0127 03:03:31.010956  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:31.010962  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:31.011026  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:31.047761  948597 cri.go:89] found id: ""
	I0127 03:03:31.047790  948597 logs.go:282] 0 containers: []
	W0127 03:03:31.047798  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:31.047805  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:31.047870  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:31.089616  948597 cri.go:89] found id: ""
	I0127 03:03:31.089655  948597 logs.go:282] 0 containers: []
	W0127 03:03:31.089667  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:31.089680  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:31.089698  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:31.146436  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:31.146478  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:31.161027  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:31.161070  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:31.229662  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:31.229689  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:31.229707  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:31.325500  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:31.325543  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:33.869074  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:33.883131  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:33.883194  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:33.917218  948597 cri.go:89] found id: ""
	I0127 03:03:33.917258  948597 logs.go:282] 0 containers: []
	W0127 03:03:33.917274  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:33.917282  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:33.917354  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:33.951537  948597 cri.go:89] found id: ""
	I0127 03:03:33.951567  948597 logs.go:282] 0 containers: []
	W0127 03:03:33.951576  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:33.951585  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:33.951639  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:33.985079  948597 cri.go:89] found id: ""
	I0127 03:03:33.985113  948597 logs.go:282] 0 containers: []
	W0127 03:03:33.985124  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:33.985132  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:33.985194  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:34.020082  948597 cri.go:89] found id: ""
	I0127 03:03:34.020115  948597 logs.go:282] 0 containers: []
	W0127 03:03:34.020127  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:34.020138  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:34.020209  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:34.053063  948597 cri.go:89] found id: ""
	I0127 03:03:34.053102  948597 logs.go:282] 0 containers: []
	W0127 03:03:34.053112  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:34.053120  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:34.053188  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:34.089480  948597 cri.go:89] found id: ""
	I0127 03:03:34.089517  948597 logs.go:282] 0 containers: []
	W0127 03:03:34.089527  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:34.089536  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:34.089613  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:34.123183  948597 cri.go:89] found id: ""
	I0127 03:03:34.123222  948597 logs.go:282] 0 containers: []
	W0127 03:03:34.123234  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:34.123245  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:34.123312  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:34.155327  948597 cri.go:89] found id: ""
	I0127 03:03:34.155361  948597 logs.go:282] 0 containers: []
	W0127 03:03:34.155372  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:34.155386  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:34.155403  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:34.205520  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:34.205565  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:34.220329  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:34.220363  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:34.286306  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:34.286329  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:34.286342  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:34.365937  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:34.365982  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:36.904802  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:36.918629  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:36.918702  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:36.955888  948597 cri.go:89] found id: ""
	I0127 03:03:36.955916  948597 logs.go:282] 0 containers: []
	W0127 03:03:36.955926  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:36.955934  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:36.955994  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:36.991775  948597 cri.go:89] found id: ""
	I0127 03:03:36.991813  948597 logs.go:282] 0 containers: []
	W0127 03:03:36.991825  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:36.991837  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:36.991907  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:37.022659  948597 cri.go:89] found id: ""
	I0127 03:03:37.022691  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.022703  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:37.022711  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:37.022791  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:37.057855  948597 cri.go:89] found id: ""
	I0127 03:03:37.057880  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.057890  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:37.057899  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:37.057955  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:37.105026  948597 cri.go:89] found id: ""
	I0127 03:03:37.105052  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.105062  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:37.105069  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:37.105128  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:37.151319  948597 cri.go:89] found id: ""
	I0127 03:03:37.151342  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.151349  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:37.151355  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:37.151402  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:37.186426  948597 cri.go:89] found id: ""
	I0127 03:03:37.186459  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.186473  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:37.186482  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:37.186552  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:37.220942  948597 cri.go:89] found id: ""
	I0127 03:03:37.220977  948597 logs.go:282] 0 containers: []
	W0127 03:03:37.220988  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:37.221000  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:37.221016  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:37.293819  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:37.293875  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:37.310807  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:37.310848  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:37.407117  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:37.407155  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:37.407171  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:37.493357  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:37.493399  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:40.038711  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:40.050981  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:40.051059  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:40.086027  948597 cri.go:89] found id: ""
	I0127 03:03:40.086063  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.086074  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:40.086083  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:40.086212  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:40.120434  948597 cri.go:89] found id: ""
	I0127 03:03:40.120475  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.120487  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:40.120496  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:40.120571  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:40.159222  948597 cri.go:89] found id: ""
	I0127 03:03:40.159260  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.159272  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:40.159280  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:40.159347  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:40.197538  948597 cri.go:89] found id: ""
	I0127 03:03:40.197572  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.197583  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:40.197591  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:40.197663  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:40.232189  948597 cri.go:89] found id: ""
	I0127 03:03:40.232219  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.232228  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:40.232236  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:40.232315  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:40.268148  948597 cri.go:89] found id: ""
	I0127 03:03:40.268179  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.268190  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:40.268199  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:40.268259  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:40.304072  948597 cri.go:89] found id: ""
	I0127 03:03:40.304106  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.304130  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:40.304136  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:40.304204  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:40.339120  948597 cri.go:89] found id: ""
	I0127 03:03:40.339172  948597 logs.go:282] 0 containers: []
	W0127 03:03:40.339184  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:40.339197  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:40.339215  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:40.393918  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:40.393966  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:40.407070  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:40.407106  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:40.483371  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:40.483403  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:40.483419  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:40.564819  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:40.564860  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:43.106464  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:43.121232  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:43.121331  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:43.171060  948597 cri.go:89] found id: ""
	I0127 03:03:43.171104  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.171116  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:43.171124  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:43.171199  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:43.203877  948597 cri.go:89] found id: ""
	I0127 03:03:43.203909  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.203918  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:43.203924  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:43.203981  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:43.237235  948597 cri.go:89] found id: ""
	I0127 03:03:43.237274  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.237286  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:43.237294  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:43.237359  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:43.273399  948597 cri.go:89] found id: ""
	I0127 03:03:43.273426  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.273435  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:43.273441  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:43.273494  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:43.311564  948597 cri.go:89] found id: ""
	I0127 03:03:43.311598  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.311607  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:43.311614  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:43.311669  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:43.349751  948597 cri.go:89] found id: ""
	I0127 03:03:43.349789  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.349800  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:43.349809  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:43.349874  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:43.382686  948597 cri.go:89] found id: ""
	I0127 03:03:43.382724  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.382733  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:43.382739  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:43.382816  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:43.415446  948597 cri.go:89] found id: ""
	I0127 03:03:43.415479  948597 logs.go:282] 0 containers: []
	W0127 03:03:43.415488  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:43.415497  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:43.415511  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:43.465405  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:43.465444  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:43.479301  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:43.479334  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:43.552309  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:43.552339  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:43.552353  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:43.634482  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:43.634525  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:46.174122  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:46.186558  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:46.186621  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:46.217303  948597 cri.go:89] found id: ""
	I0127 03:03:46.217333  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.217341  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:46.217348  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:46.217400  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:46.247949  948597 cri.go:89] found id: ""
	I0127 03:03:46.247981  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.247991  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:46.247997  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:46.248049  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:46.282279  948597 cri.go:89] found id: ""
	I0127 03:03:46.282316  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.282327  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:46.282335  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:46.282392  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:46.317285  948597 cri.go:89] found id: ""
	I0127 03:03:46.317322  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.317331  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:46.317337  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:46.317401  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:46.352177  948597 cri.go:89] found id: ""
	I0127 03:03:46.352214  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.352226  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:46.352234  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:46.352307  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:46.383047  948597 cri.go:89] found id: ""
	I0127 03:03:46.383085  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.383093  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:46.383099  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:46.383153  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:46.416380  948597 cri.go:89] found id: ""
	I0127 03:03:46.416410  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.416421  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:46.416430  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:46.416482  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:46.452736  948597 cri.go:89] found id: ""
	I0127 03:03:46.452765  948597 logs.go:282] 0 containers: []
	W0127 03:03:46.452773  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:46.452783  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:46.452800  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:46.502862  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:46.502902  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:46.517382  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:46.517417  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:46.586664  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:46.586688  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:46.586705  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:46.659182  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:46.659223  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:49.199681  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:49.214491  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:49.214575  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:49.252899  948597 cri.go:89] found id: ""
	I0127 03:03:49.252951  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.252963  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:49.252972  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:49.253043  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:49.290868  948597 cri.go:89] found id: ""
	I0127 03:03:49.290901  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.290914  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:49.290922  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:49.290980  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:49.333138  948597 cri.go:89] found id: ""
	I0127 03:03:49.333174  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.333187  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:49.333194  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:49.333266  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:49.373836  948597 cri.go:89] found id: ""
	I0127 03:03:49.373874  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.373882  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:49.373888  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:49.373973  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:49.410145  948597 cri.go:89] found id: ""
	I0127 03:03:49.410182  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.410192  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:49.410198  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:49.410255  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:49.452337  948597 cri.go:89] found id: ""
	I0127 03:03:49.452376  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.452388  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:49.452397  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:49.452467  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:49.489839  948597 cri.go:89] found id: ""
	I0127 03:03:49.489875  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.489883  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:49.489889  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:49.489957  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:49.528030  948597 cri.go:89] found id: ""
	I0127 03:03:49.528059  948597 logs.go:282] 0 containers: []
	W0127 03:03:49.528067  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:49.528077  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:49.528091  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:49.582640  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:49.582681  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:49.608985  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:49.609025  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:49.708573  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:49.708603  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:49.708622  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:49.789748  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:49.789800  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:52.327436  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:52.340369  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:52.340450  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:52.373847  948597 cri.go:89] found id: ""
	I0127 03:03:52.373883  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.373895  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:52.373903  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:52.373981  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:52.406843  948597 cri.go:89] found id: ""
	I0127 03:03:52.406886  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.406897  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:52.406905  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:52.406980  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:52.439897  948597 cri.go:89] found id: ""
	I0127 03:03:52.439931  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.439943  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:52.439951  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:52.440014  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:52.473347  948597 cri.go:89] found id: ""
	I0127 03:03:52.473388  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.473406  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:52.473416  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:52.473485  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:52.509312  948597 cri.go:89] found id: ""
	I0127 03:03:52.509343  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.509362  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:52.509370  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:52.509442  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:52.542529  948597 cri.go:89] found id: ""
	I0127 03:03:52.542562  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.542573  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:52.542582  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:52.542654  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:52.575340  948597 cri.go:89] found id: ""
	I0127 03:03:52.575367  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.575375  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:52.575381  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:52.575435  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:52.608431  948597 cri.go:89] found id: ""
	I0127 03:03:52.608468  948597 logs.go:282] 0 containers: []
	W0127 03:03:52.608479  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:52.608492  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:52.608508  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:52.661894  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:52.661940  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:52.675310  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:52.675364  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:52.746395  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:52.746443  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:52.746462  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:52.828155  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:52.828206  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:55.374498  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:55.387854  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:55.387941  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:55.425806  948597 cri.go:89] found id: ""
	I0127 03:03:55.425855  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.425864  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:55.425870  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:55.425923  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:55.461410  948597 cri.go:89] found id: ""
	I0127 03:03:55.461444  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.461456  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:55.461465  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:55.461531  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:55.499078  948597 cri.go:89] found id: ""
	I0127 03:03:55.499113  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.499124  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:55.499133  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:55.499206  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:55.537937  948597 cri.go:89] found id: ""
	I0127 03:03:55.537975  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.537984  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:55.537991  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:55.538056  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:55.569681  948597 cri.go:89] found id: ""
	I0127 03:03:55.569721  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.569731  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:55.569741  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:55.569811  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:55.603344  948597 cri.go:89] found id: ""
	I0127 03:03:55.603380  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.603392  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:55.603400  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:55.603465  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:55.639997  948597 cri.go:89] found id: ""
	I0127 03:03:55.640029  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.640037  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:55.640044  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:55.640102  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:55.675648  948597 cri.go:89] found id: ""
	I0127 03:03:55.675689  948597 logs.go:282] 0 containers: []
	W0127 03:03:55.675703  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:55.675716  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:55.675733  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:55.743163  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:55.743196  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:55.743214  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:03:55.822828  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:55.822870  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:55.859306  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:55.859340  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:55.909787  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:55.909835  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:58.426284  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:03:58.439993  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:03:58.440151  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:03:58.474547  948597 cri.go:89] found id: ""
	I0127 03:03:58.474576  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.474584  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:03:58.474591  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:03:58.474645  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:03:58.512401  948597 cri.go:89] found id: ""
	I0127 03:03:58.512437  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.512449  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:03:58.512462  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:03:58.512534  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:03:58.547367  948597 cri.go:89] found id: ""
	I0127 03:03:58.547405  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.547415  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:03:58.547424  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:03:58.547490  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:03:58.583663  948597 cri.go:89] found id: ""
	I0127 03:03:58.583694  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.583704  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:03:58.583712  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:03:58.583803  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:03:58.615715  948597 cri.go:89] found id: ""
	I0127 03:03:58.615747  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.615758  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:03:58.615768  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:03:58.615828  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:03:58.654269  948597 cri.go:89] found id: ""
	I0127 03:03:58.654304  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.654322  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:03:58.654331  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:03:58.654406  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:03:58.691337  948597 cri.go:89] found id: ""
	I0127 03:03:58.691369  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.691378  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:03:58.691384  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:03:58.691441  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:03:58.730699  948597 cri.go:89] found id: ""
	I0127 03:03:58.730733  948597 logs.go:282] 0 containers: []
	W0127 03:03:58.730743  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:03:58.730756  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:03:58.730771  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:03:58.767458  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:03:58.767490  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:03:58.818831  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:03:58.818873  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:03:58.832848  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:03:58.832887  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:03:58.898382  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:03:58.898407  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:03:58.898424  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:01.479880  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:01.492851  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:01.492954  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:01.535445  948597 cri.go:89] found id: ""
	I0127 03:04:01.535474  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.535488  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:01.535496  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:01.535570  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:01.569041  948597 cri.go:89] found id: ""
	I0127 03:04:01.569076  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.569096  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:01.569105  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:01.569175  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:01.602966  948597 cri.go:89] found id: ""
	I0127 03:04:01.603001  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.603012  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:01.603020  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:01.603092  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:01.639788  948597 cri.go:89] found id: ""
	I0127 03:04:01.639828  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.639840  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:01.639849  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:01.639932  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:01.671737  948597 cri.go:89] found id: ""
	I0127 03:04:01.671780  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.671792  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:01.671800  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:01.671862  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:01.704394  948597 cri.go:89] found id: ""
	I0127 03:04:01.704435  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.704448  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:01.704464  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:01.704530  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:01.737350  948597 cri.go:89] found id: ""
	I0127 03:04:01.737390  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.737402  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:01.737410  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:01.737478  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:01.770680  948597 cri.go:89] found id: ""
	I0127 03:04:01.770717  948597 logs.go:282] 0 containers: []
	W0127 03:04:01.770727  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:01.770739  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:01.770751  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:01.824973  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:01.825026  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:01.837855  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:01.837889  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:01.909142  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:01.909174  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:01.909192  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:01.988107  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:01.988159  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:04.530582  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:04.545497  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:04.545589  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:04.578544  948597 cri.go:89] found id: ""
	I0127 03:04:04.578575  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.578586  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:04.578594  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:04.578665  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:04.610171  948597 cri.go:89] found id: ""
	I0127 03:04:04.610204  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.610214  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:04.610222  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:04.610295  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:04.642472  948597 cri.go:89] found id: ""
	I0127 03:04:04.642509  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.642518  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:04.642524  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:04.642587  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:04.676586  948597 cri.go:89] found id: ""
	I0127 03:04:04.676621  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.676632  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:04.676640  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:04.676710  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:04.714191  948597 cri.go:89] found id: ""
	I0127 03:04:04.714227  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.714235  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:04.714242  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:04.714306  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:04.752352  948597 cri.go:89] found id: ""
	I0127 03:04:04.752384  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.752392  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:04.752399  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:04.752480  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:04.786272  948597 cri.go:89] found id: ""
	I0127 03:04:04.786310  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.786320  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:04.786326  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:04.786381  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:04.819721  948597 cri.go:89] found id: ""
	I0127 03:04:04.819756  948597 logs.go:282] 0 containers: []
	W0127 03:04:04.819767  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:04.819781  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:04.819797  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:04.870116  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:04.870158  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:04.884194  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:04.884224  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:04.953746  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:04.953777  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:04.953794  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:05.030733  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:05.030781  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:07.570182  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:07.584446  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:07.584530  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:07.621731  948597 cri.go:89] found id: ""
	I0127 03:04:07.621763  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.621771  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:07.621778  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:07.621847  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:07.654036  948597 cri.go:89] found id: ""
	I0127 03:04:07.654069  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.654078  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:07.654093  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:07.654165  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:07.688261  948597 cri.go:89] found id: ""
	I0127 03:04:07.688292  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.688299  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:07.688306  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:07.688366  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:07.722139  948597 cri.go:89] found id: ""
	I0127 03:04:07.722172  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.722184  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:07.722192  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:07.722261  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:07.757871  948597 cri.go:89] found id: ""
	I0127 03:04:07.757914  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.757926  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:07.757935  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:07.758012  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:07.794552  948597 cri.go:89] found id: ""
	I0127 03:04:07.794590  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.794601  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:07.794615  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:07.794688  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:07.828557  948597 cri.go:89] found id: ""
	I0127 03:04:07.828586  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.828594  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:07.828600  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:07.828670  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:07.860545  948597 cri.go:89] found id: ""
	I0127 03:04:07.860581  948597 logs.go:282] 0 containers: []
	W0127 03:04:07.860593  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:07.860607  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:07.860629  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:07.930137  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:07.930183  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:07.930201  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:08.007159  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:08.007204  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:08.047033  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:08.047068  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:08.102699  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:08.102748  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:10.617189  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:10.630132  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:10.630200  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:10.664116  948597 cri.go:89] found id: ""
	I0127 03:04:10.664152  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.664164  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:10.664172  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:10.664242  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:10.698576  948597 cri.go:89] found id: ""
	I0127 03:04:10.698607  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.698615  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:10.698621  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:10.698675  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:10.734799  948597 cri.go:89] found id: ""
	I0127 03:04:10.734830  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.734842  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:10.734850  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:10.734927  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:10.767540  948597 cri.go:89] found id: ""
	I0127 03:04:10.767569  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.767577  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:10.767584  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:10.767638  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:10.807593  948597 cri.go:89] found id: ""
	I0127 03:04:10.807629  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.807640  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:10.807649  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:10.807727  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:10.839121  948597 cri.go:89] found id: ""
	I0127 03:04:10.839159  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.839170  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:10.839179  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:10.839241  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:10.872851  948597 cri.go:89] found id: ""
	I0127 03:04:10.872885  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.872896  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:10.872906  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:10.872999  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:10.910531  948597 cri.go:89] found id: ""
	I0127 03:04:10.910569  948597 logs.go:282] 0 containers: []
	W0127 03:04:10.910580  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:10.910596  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:10.910613  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:10.993523  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:10.993570  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:11.033788  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:11.033818  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:11.087677  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:11.087721  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:11.103078  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:11.103109  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:11.175582  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:13.676472  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:13.689467  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:13.689537  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:13.727237  948597 cri.go:89] found id: ""
	I0127 03:04:13.727265  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.727273  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:13.727279  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:13.727328  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:13.763127  948597 cri.go:89] found id: ""
	I0127 03:04:13.763158  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.763166  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:13.763171  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:13.763222  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:13.794647  948597 cri.go:89] found id: ""
	I0127 03:04:13.794675  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.794683  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:13.794689  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:13.794740  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:13.827523  948597 cri.go:89] found id: ""
	I0127 03:04:13.827559  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.827570  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:13.827578  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:13.827651  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:13.859085  948597 cri.go:89] found id: ""
	I0127 03:04:13.859122  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.859134  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:13.859143  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:13.859201  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:13.895325  948597 cri.go:89] found id: ""
	I0127 03:04:13.895353  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.895360  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:13.895367  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:13.895421  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:13.933592  948597 cri.go:89] found id: ""
	I0127 03:04:13.933625  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.933635  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:13.933642  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:13.933706  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:13.966739  948597 cri.go:89] found id: ""
	I0127 03:04:13.966792  948597 logs.go:282] 0 containers: []
	W0127 03:04:13.966805  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:13.966822  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:13.966839  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:14.040424  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:14.040467  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:14.077156  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:14.077184  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:14.132748  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:14.132808  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:14.146563  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:14.146595  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:14.215693  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:16.717073  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:16.729638  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:16.729715  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:16.768346  948597 cri.go:89] found id: ""
	I0127 03:04:16.768385  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.768398  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:16.768407  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:16.768473  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:16.804431  948597 cri.go:89] found id: ""
	I0127 03:04:16.804461  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.804470  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:16.804476  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:16.804526  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:16.839115  948597 cri.go:89] found id: ""
	I0127 03:04:16.839143  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.839151  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:16.839156  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:16.839212  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:16.873335  948597 cri.go:89] found id: ""
	I0127 03:04:16.873364  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.873372  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:16.873380  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:16.873432  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:16.910286  948597 cri.go:89] found id: ""
	I0127 03:04:16.910332  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.910345  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:16.910353  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:16.910423  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:16.946551  948597 cri.go:89] found id: ""
	I0127 03:04:16.946589  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.946600  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:16.946609  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:16.946672  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:16.978047  948597 cri.go:89] found id: ""
	I0127 03:04:16.978083  948597 logs.go:282] 0 containers: []
	W0127 03:04:16.978094  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:16.978102  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:16.978176  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:17.013720  948597 cri.go:89] found id: ""
	I0127 03:04:17.013761  948597 logs.go:282] 0 containers: []
	W0127 03:04:17.013774  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:17.013788  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:17.013807  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:17.026513  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:17.026547  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:17.096011  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:17.096038  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:17.096054  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:17.174404  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:17.174448  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:17.216806  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:17.216844  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:19.767388  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:19.780137  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:19.780207  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:19.814660  948597 cri.go:89] found id: ""
	I0127 03:04:19.814698  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.814710  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:19.814718  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:19.814791  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:19.850990  948597 cri.go:89] found id: ""
	I0127 03:04:19.851020  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.851029  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:19.851035  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:19.851090  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:19.887644  948597 cri.go:89] found id: ""
	I0127 03:04:19.887682  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.887693  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:19.887701  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:19.887767  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:19.921419  948597 cri.go:89] found id: ""
	I0127 03:04:19.921453  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.921469  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:19.921478  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:19.921536  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:19.959616  948597 cri.go:89] found id: ""
	I0127 03:04:19.959648  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.959657  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:19.959663  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:19.959734  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:19.997571  948597 cri.go:89] found id: ""
	I0127 03:04:19.997601  948597 logs.go:282] 0 containers: []
	W0127 03:04:19.997609  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:19.997615  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:19.997668  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:20.031386  948597 cri.go:89] found id: ""
	I0127 03:04:20.031415  948597 logs.go:282] 0 containers: []
	W0127 03:04:20.031426  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:20.031435  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:20.031504  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:20.067334  948597 cri.go:89] found id: ""
	I0127 03:04:20.067363  948597 logs.go:282] 0 containers: []
	W0127 03:04:20.067371  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:20.067381  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:20.067395  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:20.120601  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:20.120643  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:20.134312  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:20.134347  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:20.198629  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:20.198660  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:20.198684  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:20.274085  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:20.274125  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:22.813144  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:22.826000  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:22.826087  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:22.862787  948597 cri.go:89] found id: ""
	I0127 03:04:22.862819  948597 logs.go:282] 0 containers: []
	W0127 03:04:22.862828  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:22.862835  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:22.862891  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:22.898684  948597 cri.go:89] found id: ""
	I0127 03:04:22.898723  948597 logs.go:282] 0 containers: []
	W0127 03:04:22.898735  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:22.898743  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:22.898811  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:22.934201  948597 cri.go:89] found id: ""
	I0127 03:04:22.934240  948597 logs.go:282] 0 containers: []
	W0127 03:04:22.934251  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:22.934256  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:22.934318  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:22.965200  948597 cri.go:89] found id: ""
	I0127 03:04:22.965229  948597 logs.go:282] 0 containers: []
	W0127 03:04:22.965240  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:22.965249  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:22.965313  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:22.998751  948597 cri.go:89] found id: ""
	I0127 03:04:22.998787  948597 logs.go:282] 0 containers: []
	W0127 03:04:22.998818  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:22.998828  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:22.998888  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:23.034164  948597 cri.go:89] found id: ""
	I0127 03:04:23.034204  948597 logs.go:282] 0 containers: []
	W0127 03:04:23.034214  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:23.034220  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:23.034282  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:23.067890  948597 cri.go:89] found id: ""
	I0127 03:04:23.067921  948597 logs.go:282] 0 containers: []
	W0127 03:04:23.067931  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:23.067937  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:23.067998  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:23.103103  948597 cri.go:89] found id: ""
	I0127 03:04:23.103140  948597 logs.go:282] 0 containers: []
	W0127 03:04:23.103148  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:23.103158  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:23.103171  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:23.173697  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:23.173730  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:23.173747  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:23.256183  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:23.256226  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:23.295542  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:23.295578  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:23.347155  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:23.347198  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:25.861276  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:25.874470  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:04:25.874559  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:04:25.919600  948597 cri.go:89] found id: ""
	I0127 03:04:25.919649  948597 logs.go:282] 0 containers: []
	W0127 03:04:25.919660  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:04:25.919668  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:04:25.919750  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:04:25.962661  948597 cri.go:89] found id: ""
	I0127 03:04:25.962690  948597 logs.go:282] 0 containers: []
	W0127 03:04:25.962700  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:04:25.962710  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:04:25.962777  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:04:26.005967  948597 cri.go:89] found id: ""
	I0127 03:04:26.005997  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.006007  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:04:26.006015  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:04:26.006085  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:04:26.046033  948597 cri.go:89] found id: ""
	I0127 03:04:26.046070  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.046091  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:04:26.046100  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:04:26.046177  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:04:26.096243  948597 cri.go:89] found id: ""
	I0127 03:04:26.096275  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.096286  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:04:26.096294  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:04:26.096348  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:04:26.142925  948597 cri.go:89] found id: ""
	I0127 03:04:26.142963  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.142975  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:04:26.142983  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:04:26.143063  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:04:26.189994  948597 cri.go:89] found id: ""
	I0127 03:04:26.190030  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.190041  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:04:26.190049  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:04:26.190120  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:04:26.230196  948597 cri.go:89] found id: ""
	I0127 03:04:26.230235  948597 logs.go:282] 0 containers: []
	W0127 03:04:26.230247  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:04:26.230259  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:04:26.230286  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:04:26.270693  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:04:26.270723  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:04:26.321198  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:04:26.321238  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:04:26.335043  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:04:26.335076  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:04:26.400918  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:04:26.400959  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:04:26.400978  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:04:28.980169  948597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:04:28.993297  948597 kubeadm.go:597] duration metric: took 4m4.228473717s to restartPrimaryControlPlane
	W0127 03:04:28.993398  948597 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:04:28.993426  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:04:29.445581  948597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:04:29.460453  948597 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:04:29.469928  948597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:04:29.480343  948597 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:04:29.480367  948597 kubeadm.go:157] found existing configuration files:
	
	I0127 03:04:29.480422  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:04:29.489067  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:04:29.489141  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:04:29.498734  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:04:29.507613  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:04:29.507671  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:04:29.516604  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:04:29.525243  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:04:29.525323  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:04:29.534157  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:04:29.543129  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:04:29.543188  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:04:29.552687  948597 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:04:29.624705  948597 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 03:04:29.624804  948597 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:04:29.772324  948597 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:04:29.772478  948597 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:04:29.772640  948597 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 03:04:29.954307  948597 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:04:29.956825  948597 out.go:235]   - Generating certificates and keys ...
	I0127 03:04:29.956988  948597 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:04:29.957119  948597 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:04:29.957247  948597 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:04:29.957335  948597 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:04:29.957469  948597 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:04:29.957572  948597 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:04:29.957680  948597 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:04:29.957771  948597 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:04:29.957946  948597 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:04:29.958348  948597 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:04:29.958447  948597 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:04:29.958534  948597 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:04:30.097111  948597 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:04:30.222482  948597 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:04:30.408102  948597 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:04:30.852134  948597 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:04:30.876258  948597 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:04:30.876415  948597 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:04:30.876489  948597 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:04:31.024603  948597 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:04:31.027020  948597 out.go:235]   - Booting up control plane ...
	I0127 03:04:31.027166  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:04:31.031390  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:04:31.032286  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:04:31.035572  948597 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:04:31.037860  948597 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 03:05:11.038148  948597 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 03:05:11.038721  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:05:11.038967  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:05:16.039637  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:05:16.039883  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:05:26.040326  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:05:26.040554  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:05:46.041233  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:05:46.041449  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:06:26.043310  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:06:26.043543  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:06:26.043556  948597 kubeadm.go:310] 
	I0127 03:06:26.043618  948597 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 03:06:26.043683  948597 kubeadm.go:310] 		timed out waiting for the condition
	I0127 03:06:26.043694  948597 kubeadm.go:310] 
	I0127 03:06:26.043729  948597 kubeadm.go:310] 	This error is likely caused by:
	I0127 03:06:26.043809  948597 kubeadm.go:310] 		- The kubelet is not running
	I0127 03:06:26.043992  948597 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 03:06:26.044018  948597 kubeadm.go:310] 
	I0127 03:06:26.044163  948597 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 03:06:26.044214  948597 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 03:06:26.044269  948597 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 03:06:26.044285  948597 kubeadm.go:310] 
	I0127 03:06:26.044431  948597 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 03:06:26.044554  948597 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 03:06:26.044564  948597 kubeadm.go:310] 
	I0127 03:06:26.044709  948597 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 03:06:26.044838  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 03:06:26.044981  948597 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 03:06:26.045084  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 03:06:26.045095  948597 kubeadm.go:310] 
	I0127 03:06:26.045538  948597 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:06:26.045671  948597 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 03:06:26.045742  948597 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 03:06:26.045889  948597 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 03:06:26.045928  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:06:26.517153  948597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:06:26.538095  948597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:06:26.550255  948597 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:06:26.550281  948597 kubeadm.go:157] found existing configuration files:
	
	I0127 03:06:26.550340  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:06:26.561709  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:06:26.561789  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:06:26.575890  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:06:26.588090  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:06:26.588181  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:06:26.598990  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:06:26.612033  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:06:26.612123  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:06:26.622573  948597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:06:26.631929  948597 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:06:26.632021  948597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:06:26.642806  948597 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:06:26.739328  948597 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 03:06:26.739480  948597 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:06:26.913070  948597 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:06:26.913232  948597 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:06:26.913343  948597 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 03:06:27.162200  948597 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:06:27.163992  948597 out.go:235]   - Generating certificates and keys ...
	I0127 03:06:27.164100  948597 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:06:27.164192  948597 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:06:27.164307  948597 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:06:27.164411  948597 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:06:27.164526  948597 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:06:27.164594  948597 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:06:27.164737  948597 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:06:27.165278  948597 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:06:27.165703  948597 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:06:27.165968  948597 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:06:27.166096  948597 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:06:27.166178  948597 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:06:27.376323  948597 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:06:27.654150  948597 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:06:27.749976  948597 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:06:28.118593  948597 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:06:28.143571  948597 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:06:28.144278  948597 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:06:28.144411  948597 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:06:28.290524  948597 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:06:28.292290  948597 out.go:235]   - Booting up control plane ...
	I0127 03:06:28.292428  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:06:28.292530  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:06:28.294089  948597 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:06:28.295639  948597 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:06:28.303421  948597 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 03:07:08.305483  948597 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 03:07:08.305780  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:07:08.306049  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:07:13.307137  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:07:13.307353  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:07:23.308165  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:07:23.308478  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:07:43.309450  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:07:43.309768  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:08:23.310834  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:08:23.311157  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:08:23.311197  948597 kubeadm.go:310] 
	I0127 03:08:23.311268  948597 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 03:08:23.311329  948597 kubeadm.go:310] 		timed out waiting for the condition
	I0127 03:08:23.311340  948597 kubeadm.go:310] 
	I0127 03:08:23.311389  948597 kubeadm.go:310] 	This error is likely caused by:
	I0127 03:08:23.311462  948597 kubeadm.go:310] 		- The kubelet is not running
	I0127 03:08:23.311810  948597 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 03:08:23.311831  948597 kubeadm.go:310] 
	I0127 03:08:23.312005  948597 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 03:08:23.312067  948597 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 03:08:23.312119  948597 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 03:08:23.312130  948597 kubeadm.go:310] 
	I0127 03:08:23.312273  948597 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 03:08:23.312352  948597 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 03:08:23.312366  948597 kubeadm.go:310] 
	I0127 03:08:23.312514  948597 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 03:08:23.312631  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 03:08:23.312738  948597 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 03:08:23.312844  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 03:08:23.312858  948597 kubeadm.go:310] 
	I0127 03:08:23.313323  948597 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:08:23.313454  948597 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 03:08:23.313565  948597 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 03:08:23.313622  948597 kubeadm.go:394] duration metric: took 7m58.597329739s to StartCluster
	I0127 03:08:23.313680  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:08:23.313755  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:08:23.367103  948597 cri.go:89] found id: ""
	I0127 03:08:23.367143  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.367157  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:08:23.367165  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:08:23.367244  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:08:23.405957  948597 cri.go:89] found id: ""
	I0127 03:08:23.406002  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.406013  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:08:23.406021  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:08:23.406087  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:08:23.440876  948597 cri.go:89] found id: ""
	I0127 03:08:23.440944  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.440960  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:08:23.440973  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:08:23.441059  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:08:23.476013  948597 cri.go:89] found id: ""
	I0127 03:08:23.476054  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.476066  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:08:23.476075  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:08:23.476144  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:08:23.517449  948597 cri.go:89] found id: ""
	I0127 03:08:23.517486  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.517498  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:08:23.517506  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:08:23.517572  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:08:23.561689  948597 cri.go:89] found id: ""
	I0127 03:08:23.561730  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.561743  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:08:23.561752  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:08:23.561807  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:08:23.621509  948597 cri.go:89] found id: ""
	I0127 03:08:23.621555  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.621567  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:08:23.621576  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:08:23.621658  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:08:23.690361  948597 cri.go:89] found id: ""
	I0127 03:08:23.690399  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.690411  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:08:23.690424  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:08:23.690451  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:08:23.710302  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:08:23.710345  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:08:23.816241  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:08:23.816269  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:08:23.816283  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:08:23.931831  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:08:23.931883  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:08:23.974187  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:08:23.974224  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 03:08:24.036237  948597 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 03:08:24.036324  948597 out.go:270] * 
	W0127 03:08:24.036416  948597 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:08:24.036440  948597 out.go:270] * 
	W0127 03:08:24.037735  948597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 03:08:24.041238  948597 out.go:201] 
	W0127 03:08:24.042360  948597 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:08:24.042425  948597 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 03:08:24.042454  948597 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 03:08:24.043803  948597 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
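The kubeadm output and the Suggestion line above already name the checks to run against this node; a minimal diagnostic-and-retry sketch that reuses only commands and flags quoted in the log (the combined invocation below is an illustration, not part of the recorded test run):

	# inspect the kubelet on the node, as the kubeadm output advises
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo journalctl -xeu kubelet -n 100"
	# list control-plane containers via CRI-O and find the one that failed
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# collect full logs, as the warning box requests for a GitHub issue
	out/minikube-linux-amd64 -p old-k8s-version-542356 logs --file=logs.txt
	# retry the same start with the cgroup-driver hint from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-542356 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the retry fails the same way, the crictl listing indicates whether the control-plane containers were ever created, which narrows the failure to the kubelet itself.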
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (247.544423ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
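The host reports Running while the kubelet never became healthy, so a status call that also prints the kubelet and apiserver fields separates the two; a small sketch along the lines of the status check above, assuming the Kubelet and APIServer template fields available in this minikube version:

	out/minikube-linux-amd64 status --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}' -p old-k8s-version-542356

The command still exits non-zero while the kubelet is down, as with the exit status 2 above, but the per-component output shows which piece is stopped.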
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-542356 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-896179                 | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-896179                                  | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-080871                           | kubernetes-upgrade-080871    | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113637 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
	|         | disable-driver-mounts-113637                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-150897 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:04 UTC |
	|         | default-k8s-diff-port-150897                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-150897  | default-k8s-diff-port-150897 | jenkins | v1.35.0 | 27 Jan 25 03:04 UTC | 27 Jan 25 03:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-150897 | jenkins | v1.35.0 | 27 Jan 25 03:04 UTC | 27 Jan 25 03:05 UTC |
	|         | default-k8s-diff-port-150897                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-150897       | default-k8s-diff-port-150897 | jenkins | v1.35.0 | 27 Jan 25 03:05 UTC | 27 Jan 25 03:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-150897 | jenkins | v1.35.0 | 27 Jan 25 03:05 UTC |                     |
	|         | default-k8s-diff-port-150897                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | embed-certs-896179 image list                          | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:06 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-896179                                  | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:06 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-896179                                  | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:06 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-896179                                  | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:06 UTC |
	| delete  | -p embed-certs-896179                                  | embed-certs-896179           | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:06 UTC |
	| start   | -p newest-cni-446781 --memory=2200 --alsologtostderr   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:06 UTC | 27 Jan 25 03:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-446781             | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-446781                                   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-446781                  | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-446781 --memory=2200 --alsologtostderr   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-446781 image list                           | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-446781                                   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-446781                                   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-446781                                   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	| delete  | -p newest-cni-446781                                   | newest-cni-446781            | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC | 27 Jan 25 03:07 UTC |
	| start   | -p auto-284111 --memory=3072                           | auto-284111                  | jenkins | v1.35.0 | 27 Jan 25 03:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:07:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:07:54.560451  952821 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:07:54.560635  952821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:07:54.560650  952821 out.go:358] Setting ErrFile to fd 2...
	I0127 03:07:54.560657  952821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:07:54.561233  952821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:07:54.561946  952821 out.go:352] Setting JSON to false
	I0127 03:07:54.563057  952821 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13818,"bootTime":1737933457,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:07:54.563162  952821 start.go:139] virtualization: kvm guest
	I0127 03:07:54.565303  952821 out.go:177] * [auto-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:07:54.566774  952821 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:07:54.566787  952821 notify.go:220] Checking for updates...
	I0127 03:07:54.569146  952821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:07:54.570398  952821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:07:54.571515  952821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:07:54.572625  952821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:07:54.573709  952821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:07:54.575364  952821 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:07:54.575524  952821 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:07:54.575663  952821 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:07:54.575794  952821 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:07:54.612405  952821 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 03:07:54.613585  952821 start.go:297] selected driver: kvm2
	I0127 03:07:54.613600  952821 start.go:901] validating driver "kvm2" against <nil>
	I0127 03:07:54.613613  952821 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:07:54.614371  952821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:07:54.614461  952821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:07:54.630100  952821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:07:54.630172  952821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 03:07:54.630434  952821 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:07:54.630469  952821 cni.go:84] Creating CNI manager for ""
	I0127 03:07:54.630524  952821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:07:54.630534  952821 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 03:07:54.630607  952821 start.go:340] cluster config:
	{Name:auto-284111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0127 03:07:54.630731  952821 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:07:54.632462  952821 out.go:177] * Starting "auto-284111" primary control-plane node in "auto-284111" cluster
	I0127 03:07:54.633546  952821 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:07:54.633591  952821 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:07:54.633606  952821 cache.go:56] Caching tarball of preloaded images
	I0127 03:07:54.633704  952821 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:07:54.633717  952821 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:07:54.633825  952821 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/config.json ...
	I0127 03:07:54.633856  952821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/config.json: {Name:mk9aac761e6a0c1edf631db916d37832a63a79b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:07:54.634010  952821 start.go:360] acquireMachinesLock for auto-284111: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:07:54.634053  952821 start.go:364] duration metric: took 26.924µs to acquireMachinesLock for "auto-284111"
	I0127 03:07:54.634076  952821 start.go:93] Provisioning new machine with config: &{Name:auto-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-284111 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:07:54.634171  952821 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 03:07:52.572438  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:55.076733  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:54.635642  952821 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 03:07:54.635788  952821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:07:54.635837  952821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:07:54.651175  952821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0127 03:07:54.651620  952821 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:07:54.652209  952821 main.go:141] libmachine: Using API Version  1
	I0127 03:07:54.652230  952821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:07:54.652563  952821 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:07:54.652754  952821 main.go:141] libmachine: (auto-284111) Calling .GetMachineName
	I0127 03:07:54.652939  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:07:54.653070  952821 start.go:159] libmachine.API.Create for "auto-284111" (driver="kvm2")
	I0127 03:07:54.653102  952821 client.go:168] LocalClient.Create starting
	I0127 03:07:54.653137  952821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 03:07:54.653181  952821 main.go:141] libmachine: Decoding PEM data...
	I0127 03:07:54.653202  952821 main.go:141] libmachine: Parsing certificate...
	I0127 03:07:54.653269  952821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 03:07:54.653296  952821 main.go:141] libmachine: Decoding PEM data...
	I0127 03:07:54.653314  952821 main.go:141] libmachine: Parsing certificate...
	I0127 03:07:54.653338  952821 main.go:141] libmachine: Running pre-create checks...
	I0127 03:07:54.653349  952821 main.go:141] libmachine: (auto-284111) Calling .PreCreateCheck
	I0127 03:07:54.653727  952821 main.go:141] libmachine: (auto-284111) Calling .GetConfigRaw
	I0127 03:07:54.654113  952821 main.go:141] libmachine: Creating machine...
	I0127 03:07:54.654126  952821 main.go:141] libmachine: (auto-284111) Calling .Create
	I0127 03:07:54.654231  952821 main.go:141] libmachine: (auto-284111) creating KVM machine...
	I0127 03:07:54.654250  952821 main.go:141] libmachine: (auto-284111) creating network...
	I0127 03:07:54.655555  952821 main.go:141] libmachine: (auto-284111) DBG | found existing default KVM network
	I0127 03:07:54.656845  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:54.656702  952844 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:80:59} reservation:<nil>}
	I0127 03:07:54.657950  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:54.657881  952844 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:c5:54} reservation:<nil>}
	I0127 03:07:54.659203  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:54.659126  952844 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a51b0}
	I0127 03:07:54.659229  952821 main.go:141] libmachine: (auto-284111) DBG | created network xml: 
	I0127 03:07:54.659238  952821 main.go:141] libmachine: (auto-284111) DBG | <network>
	I0127 03:07:54.659252  952821 main.go:141] libmachine: (auto-284111) DBG |   <name>mk-auto-284111</name>
	I0127 03:07:54.659271  952821 main.go:141] libmachine: (auto-284111) DBG |   <dns enable='no'/>
	I0127 03:07:54.659281  952821 main.go:141] libmachine: (auto-284111) DBG |   
	I0127 03:07:54.659294  952821 main.go:141] libmachine: (auto-284111) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 03:07:54.659316  952821 main.go:141] libmachine: (auto-284111) DBG |     <dhcp>
	I0127 03:07:54.659326  952821 main.go:141] libmachine: (auto-284111) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 03:07:54.659334  952821 main.go:141] libmachine: (auto-284111) DBG |     </dhcp>
	I0127 03:07:54.659342  952821 main.go:141] libmachine: (auto-284111) DBG |   </ip>
	I0127 03:07:54.659345  952821 main.go:141] libmachine: (auto-284111) DBG |   
	I0127 03:07:54.659350  952821 main.go:141] libmachine: (auto-284111) DBG | </network>
	I0127 03:07:54.659354  952821 main.go:141] libmachine: (auto-284111) DBG | 
	I0127 03:07:54.664237  952821 main.go:141] libmachine: (auto-284111) DBG | trying to create private KVM network mk-auto-284111 192.168.61.0/24...
	I0127 03:07:54.737385  952821 main.go:141] libmachine: (auto-284111) DBG | private KVM network mk-auto-284111 192.168.61.0/24 created
	I0127 03:07:54.737538  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:54.737338  952844 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:07:54.737572  952821 main.go:141] libmachine: (auto-284111) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111 ...
	I0127 03:07:54.737599  952821 main.go:141] libmachine: (auto-284111) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 03:07:54.737621  952821 main.go:141] libmachine: (auto-284111) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 03:07:55.029220  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:55.029073  952844 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa...
	I0127 03:07:55.164851  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:55.164700  952844 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/auto-284111.rawdisk...
	I0127 03:07:55.164881  952821 main.go:141] libmachine: (auto-284111) DBG | Writing magic tar header
	I0127 03:07:55.164891  952821 main.go:141] libmachine: (auto-284111) DBG | Writing SSH key tar header
	I0127 03:07:55.164898  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:55.164837  952844 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111 ...
	I0127 03:07:55.165006  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111
	I0127 03:07:55.165050  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111 (perms=drwx------)
	I0127 03:07:55.165073  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 03:07:55.165087  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 03:07:55.165101  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 03:07:55.165111  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 03:07:55.165118  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:07:55.165137  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 03:07:55.165151  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 03:07:55.165160  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 03:07:55.165178  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home/jenkins
	I0127 03:07:55.165189  952821 main.go:141] libmachine: (auto-284111) DBG | checking permissions on dir: /home
	I0127 03:07:55.165199  952821 main.go:141] libmachine: (auto-284111) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 03:07:55.165208  952821 main.go:141] libmachine: (auto-284111) creating domain...
	I0127 03:07:55.165225  952821 main.go:141] libmachine: (auto-284111) DBG | skipping /home - not owner
	I0127 03:07:55.166353  952821 main.go:141] libmachine: (auto-284111) define libvirt domain using xml: 
	I0127 03:07:55.166373  952821 main.go:141] libmachine: (auto-284111) <domain type='kvm'>
	I0127 03:07:55.166383  952821 main.go:141] libmachine: (auto-284111)   <name>auto-284111</name>
	I0127 03:07:55.166395  952821 main.go:141] libmachine: (auto-284111)   <memory unit='MiB'>3072</memory>
	I0127 03:07:55.166401  952821 main.go:141] libmachine: (auto-284111)   <vcpu>2</vcpu>
	I0127 03:07:55.166405  952821 main.go:141] libmachine: (auto-284111)   <features>
	I0127 03:07:55.166414  952821 main.go:141] libmachine: (auto-284111)     <acpi/>
	I0127 03:07:55.166425  952821 main.go:141] libmachine: (auto-284111)     <apic/>
	I0127 03:07:55.166433  952821 main.go:141] libmachine: (auto-284111)     <pae/>
	I0127 03:07:55.166444  952821 main.go:141] libmachine: (auto-284111)     
	I0127 03:07:55.166451  952821 main.go:141] libmachine: (auto-284111)   </features>
	I0127 03:07:55.166457  952821 main.go:141] libmachine: (auto-284111)   <cpu mode='host-passthrough'>
	I0127 03:07:55.166490  952821 main.go:141] libmachine: (auto-284111)   
	I0127 03:07:55.166513  952821 main.go:141] libmachine: (auto-284111)   </cpu>
	I0127 03:07:55.166523  952821 main.go:141] libmachine: (auto-284111)   <os>
	I0127 03:07:55.166539  952821 main.go:141] libmachine: (auto-284111)     <type>hvm</type>
	I0127 03:07:55.166551  952821 main.go:141] libmachine: (auto-284111)     <boot dev='cdrom'/>
	I0127 03:07:55.166559  952821 main.go:141] libmachine: (auto-284111)     <boot dev='hd'/>
	I0127 03:07:55.166573  952821 main.go:141] libmachine: (auto-284111)     <bootmenu enable='no'/>
	I0127 03:07:55.166583  952821 main.go:141] libmachine: (auto-284111)   </os>
	I0127 03:07:55.166593  952821 main.go:141] libmachine: (auto-284111)   <devices>
	I0127 03:07:55.166605  952821 main.go:141] libmachine: (auto-284111)     <disk type='file' device='cdrom'>
	I0127 03:07:55.166633  952821 main.go:141] libmachine: (auto-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/boot2docker.iso'/>
	I0127 03:07:55.166650  952821 main.go:141] libmachine: (auto-284111)       <target dev='hdc' bus='scsi'/>
	I0127 03:07:55.166663  952821 main.go:141] libmachine: (auto-284111)       <readonly/>
	I0127 03:07:55.166671  952821 main.go:141] libmachine: (auto-284111)     </disk>
	I0127 03:07:55.166679  952821 main.go:141] libmachine: (auto-284111)     <disk type='file' device='disk'>
	I0127 03:07:55.166698  952821 main.go:141] libmachine: (auto-284111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 03:07:55.166713  952821 main.go:141] libmachine: (auto-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/auto-284111.rawdisk'/>
	I0127 03:07:55.166724  952821 main.go:141] libmachine: (auto-284111)       <target dev='hda' bus='virtio'/>
	I0127 03:07:55.166732  952821 main.go:141] libmachine: (auto-284111)     </disk>
	I0127 03:07:55.166749  952821 main.go:141] libmachine: (auto-284111)     <interface type='network'>
	I0127 03:07:55.166758  952821 main.go:141] libmachine: (auto-284111)       <source network='mk-auto-284111'/>
	I0127 03:07:55.166766  952821 main.go:141] libmachine: (auto-284111)       <model type='virtio'/>
	I0127 03:07:55.166778  952821 main.go:141] libmachine: (auto-284111)     </interface>
	I0127 03:07:55.166790  952821 main.go:141] libmachine: (auto-284111)     <interface type='network'>
	I0127 03:07:55.166800  952821 main.go:141] libmachine: (auto-284111)       <source network='default'/>
	I0127 03:07:55.166807  952821 main.go:141] libmachine: (auto-284111)       <model type='virtio'/>
	I0127 03:07:55.166815  952821 main.go:141] libmachine: (auto-284111)     </interface>
	I0127 03:07:55.166821  952821 main.go:141] libmachine: (auto-284111)     <serial type='pty'>
	I0127 03:07:55.166829  952821 main.go:141] libmachine: (auto-284111)       <target port='0'/>
	I0127 03:07:55.166842  952821 main.go:141] libmachine: (auto-284111)     </serial>
	I0127 03:07:55.166860  952821 main.go:141] libmachine: (auto-284111)     <console type='pty'>
	I0127 03:07:55.166873  952821 main.go:141] libmachine: (auto-284111)       <target type='serial' port='0'/>
	I0127 03:07:55.166882  952821 main.go:141] libmachine: (auto-284111)     </console>
	I0127 03:07:55.166898  952821 main.go:141] libmachine: (auto-284111)     <rng model='virtio'>
	I0127 03:07:55.166912  952821 main.go:141] libmachine: (auto-284111)       <backend model='random'>/dev/random</backend>
	I0127 03:07:55.166926  952821 main.go:141] libmachine: (auto-284111)     </rng>
	I0127 03:07:55.166938  952821 main.go:141] libmachine: (auto-284111)     
	I0127 03:07:55.166947  952821 main.go:141] libmachine: (auto-284111)     
	I0127 03:07:55.166956  952821 main.go:141] libmachine: (auto-284111)   </devices>
	I0127 03:07:55.166978  952821 main.go:141] libmachine: (auto-284111) </domain>
	I0127 03:07:55.166993  952821 main.go:141] libmachine: (auto-284111) 
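The domain XML above boots the guest from the boot2docker ISO (SCSI cdrom), attaches the raw disk over virtio, and adds two virtio NICs, one on the default network and one on mk-auto-284111, which is why two MAC addresses are reported just below. A quick way to confirm the stored definition, again assuming direct virsh access:

    # stored definition of the freshly defined (not yet started) domain
    virsh dumpxml auto-284111
    # its interfaces with the network and MAC each one is bound to
    virsh domiflist auto-284111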
	I0127 03:07:55.171028  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:c7:47:a6 in network default
	I0127 03:07:55.171601  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:55.171616  952821 main.go:141] libmachine: (auto-284111) starting domain...
	I0127 03:07:55.171624  952821 main.go:141] libmachine: (auto-284111) ensuring networks are active...
	I0127 03:07:55.172277  952821 main.go:141] libmachine: (auto-284111) Ensuring network default is active
	I0127 03:07:55.172612  952821 main.go:141] libmachine: (auto-284111) Ensuring network mk-auto-284111 is active
	I0127 03:07:55.173268  952821 main.go:141] libmachine: (auto-284111) getting domain XML...
	I0127 03:07:55.173959  952821 main.go:141] libmachine: (auto-284111) creating domain...
	I0127 03:07:56.413575  952821 main.go:141] libmachine: (auto-284111) waiting for IP...
	I0127 03:07:56.414390  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:56.414811  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:56.414872  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:56.414830  952844 retry.go:31] will retry after 275.472566ms: waiting for domain to come up
	I0127 03:07:56.692546  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:56.693172  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:56.693218  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:56.693165  952844 retry.go:31] will retry after 258.765522ms: waiting for domain to come up
	I0127 03:07:56.953338  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:56.953817  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:56.953844  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:56.953786  952844 retry.go:31] will retry after 400.334846ms: waiting for domain to come up
	I0127 03:07:57.355443  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:57.356066  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:57.356117  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:57.356052  952844 retry.go:31] will retry after 536.936731ms: waiting for domain to come up
	I0127 03:07:57.894916  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:57.895483  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:57.895509  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:57.895439  952844 retry.go:31] will retry after 575.578939ms: waiting for domain to come up
	I0127 03:07:58.472279  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:58.472860  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:58.472892  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:58.472800  952844 retry.go:31] will retry after 855.050083ms: waiting for domain to come up
	I0127 03:07:59.329664  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:07:59.330094  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:07:59.330119  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:07:59.330059  952844 retry.go:31] will retry after 1.022261114s: waiting for domain to come up
	I0127 03:07:57.570108  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:59.570300  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:00.353773  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:00.354183  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:00.354210  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:00.354141  952844 retry.go:31] will retry after 1.049306206s: waiting for domain to come up
	I0127 03:08:01.405362  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:01.405809  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:01.405840  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:01.405763  952844 retry.go:31] will retry after 1.652174205s: waiting for domain to come up
	I0127 03:08:03.059255  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:03.059702  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:03.059750  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:03.059677  952844 retry.go:31] will retry after 1.553848351s: waiting for domain to come up
	I0127 03:08:02.072663  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:04.570416  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:06.570994  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:04.615428  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:04.615965  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:04.615998  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:04.615916  952844 retry.go:31] will retry after 2.342413584s: waiting for domain to come up
	I0127 03:08:06.960652  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:06.961106  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:06.961140  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:06.961075  952844 retry.go:31] will retry after 3.26934966s: waiting for domain to come up
	I0127 03:08:08.571244  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:11.070289  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:10.232543  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:10.233093  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:10.233114  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:10.233052  952844 retry.go:31] will retry after 4.009349524s: waiting for domain to come up
	I0127 03:08:14.247146  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:14.247649  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find current IP address of domain auto-284111 in network mk-auto-284111
	I0127 03:08:14.247671  952821 main.go:141] libmachine: (auto-284111) DBG | I0127 03:08:14.247619  952844 retry.go:31] will retry after 3.450546112s: waiting for domain to come up
	I0127 03:08:13.570517  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:15.570755  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:17.701702  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.702172  952821 main.go:141] libmachine: (auto-284111) found domain IP: 192.168.61.8
	I0127 03:08:17.702194  952821 main.go:141] libmachine: (auto-284111) reserving static IP address...
	I0127 03:08:17.702203  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has current primary IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.702564  952821 main.go:141] libmachine: (auto-284111) DBG | unable to find host DHCP lease matching {name: "auto-284111", mac: "52:54:00:d9:35:20", ip: "192.168.61.8"} in network mk-auto-284111
	I0127 03:08:17.782280  952821 main.go:141] libmachine: (auto-284111) DBG | Getting to WaitForSSH function...
	I0127 03:08:17.782321  952821 main.go:141] libmachine: (auto-284111) reserved static IP address 192.168.61.8 for domain auto-284111
	I0127 03:08:17.782335  952821 main.go:141] libmachine: (auto-284111) waiting for SSH...
	I0127 03:08:17.784865  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.785380  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:17.785412  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.785470  952821 main.go:141] libmachine: (auto-284111) DBG | Using SSH client type: external
	I0127 03:08:17.785492  952821 main.go:141] libmachine: (auto-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa (-rw-------)
	I0127 03:08:17.785538  952821 main.go:141] libmachine: (auto-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:08:17.785561  952821 main.go:141] libmachine: (auto-284111) DBG | About to run SSH command:
	I0127 03:08:17.785584  952821 main.go:141] libmachine: (auto-284111) DBG | exit 0
	I0127 03:08:17.917155  952821 main.go:141] libmachine: (auto-284111) DBG | SSH cmd err, output: <nil>: 
	I0127 03:08:17.917448  952821 main.go:141] libmachine: (auto-284111) KVM machine creation complete
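WaitForSSH above is just a retry around running exit 0 through the external ssh client with the options logged at 03:08:17.785538. A rough standalone equivalent of that probe, reusing the logged key path and address (the 30-attempt bound and 2s sleep are assumptions, not minikube's values):

    KEY=/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa
    for attempt in $(seq 1 30); do
      ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o IdentitiesOnly=yes -i "$KEY" docker@192.168.61.8 'exit 0' && break
      sleep 2
    done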
	I0127 03:08:17.917836  952821 main.go:141] libmachine: (auto-284111) Calling .GetConfigRaw
	I0127 03:08:17.918445  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:17.918638  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:17.918847  952821 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 03:08:17.918867  952821 main.go:141] libmachine: (auto-284111) Calling .GetState
	I0127 03:08:17.920255  952821 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 03:08:17.920268  952821 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 03:08:17.920273  952821 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 03:08:17.920278  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:17.922732  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.923180  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:17.923210  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:17.923413  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:17.923598  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:17.923722  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:17.923852  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:17.924023  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:17.924253  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:17.924273  952821 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 03:08:18.032141  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:08:18.032177  952821 main.go:141] libmachine: Detecting the provisioner...
	I0127 03:08:18.032189  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.035407  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.035723  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.035757  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.035888  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.036163  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.036323  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.036462  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.036647  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:18.036826  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:18.036836  952821 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 03:08:18.145661  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 03:08:18.145762  952821 main.go:141] libmachine: found compatible host: buildroot
	I0127 03:08:18.145772  952821 main.go:141] libmachine: Provisioning with buildroot...
	I0127 03:08:18.145780  952821 main.go:141] libmachine: (auto-284111) Calling .GetMachineName
	I0127 03:08:18.146098  952821 buildroot.go:166] provisioning hostname "auto-284111"
	I0127 03:08:18.146134  952821 main.go:141] libmachine: (auto-284111) Calling .GetMachineName
	I0127 03:08:18.146351  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.149208  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.149530  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.149560  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.149756  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.149947  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.150107  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.150225  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.150400  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:18.150630  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:18.150648  952821 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-284111 && echo "auto-284111" | sudo tee /etc/hostname
	I0127 03:08:18.271890  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-284111
	
	I0127 03:08:18.271929  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.274826  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.275246  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.275295  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.275488  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.275706  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.275877  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.276028  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.276202  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:18.276423  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:18.276454  952821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-284111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-284111/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-284111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:08:18.397668  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:08:18.397706  952821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:08:18.397761  952821 buildroot.go:174] setting up certificates
	I0127 03:08:18.397779  952821 provision.go:84] configureAuth start
	I0127 03:08:18.397796  952821 main.go:141] libmachine: (auto-284111) Calling .GetMachineName
	I0127 03:08:18.398137  952821 main.go:141] libmachine: (auto-284111) Calling .GetIP
	I0127 03:08:18.400743  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.401153  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.401203  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.401325  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.403477  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.403838  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.403874  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.404018  952821 provision.go:143] copyHostCerts
	I0127 03:08:18.404084  952821 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:08:18.404105  952821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:08:18.404202  952821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:08:18.404339  952821 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:08:18.404350  952821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:08:18.404379  952821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:08:18.404448  952821 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:08:18.404456  952821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:08:18.404481  952821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:08:18.404545  952821 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.auto-284111 san=[127.0.0.1 192.168.61.8 auto-284111 localhost minikube]
	I0127 03:08:18.579120  952821 provision.go:177] copyRemoteCerts
	I0127 03:08:18.579217  952821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:08:18.579256  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.582111  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.582440  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.582477  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.582656  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.582865  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.583017  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.583180  952821 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa Username:docker}
	I0127 03:08:18.671136  952821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:08:18.694621  952821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0127 03:08:18.716749  952821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:08:18.739102  952821 provision.go:87] duration metric: took 341.30823ms to configureAuth
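configureAuth reuses the host-side CA under .minikube/certs, generates a server certificate with the SANs listed above (127.0.0.1, 192.168.61.8, auto-284111, localhost, minikube) and copies the CA and server pair into /etc/docker on the guest. One way to sanity-check the generated server certificate from the Jenkins host (the -ext flag needs a reasonably recent OpenSSL; older builds can use -text and look for the SAN section):

    openssl x509 -noout -subject -dates -ext subjectAltName \
        -in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem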
	I0127 03:08:18.739131  952821 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:08:18.739311  952821 config.go:182] Loaded profile config "auto-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:08:18.739402  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.742291  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.742670  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.742698  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.742936  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.743162  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.743327  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.743573  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.743722  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:18.743955  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:18.744009  952821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:08:18.986822  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:08:18.986854  952821 main.go:141] libmachine: Checking connection to Docker...
	I0127 03:08:18.986890  952821 main.go:141] libmachine: (auto-284111) Calling .GetURL
	I0127 03:08:18.988155  952821 main.go:141] libmachine: (auto-284111) DBG | using libvirt version 6000000
	I0127 03:08:18.990550  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.990980  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.991007  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.991187  952821 main.go:141] libmachine: Docker is up and running!
	I0127 03:08:18.991201  952821 main.go:141] libmachine: Reticulating splines...
	I0127 03:08:18.991210  952821 client.go:171] duration metric: took 24.338096045s to LocalClient.Create
	I0127 03:08:18.991246  952821 start.go:167] duration metric: took 24.338176994s to libmachine.API.Create "auto-284111"
	I0127 03:08:18.991259  952821 start.go:293] postStartSetup for "auto-284111" (driver="kvm2")
	I0127 03:08:18.991272  952821 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:08:18.991307  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:18.991571  952821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:08:18.991595  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:18.993936  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.994380  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:18.994421  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:18.994580  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:18.994757  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:18.994903  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:18.995000  952821 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa Username:docker}
	I0127 03:08:19.081005  952821 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:08:19.085058  952821 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:08:19.085092  952821 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:08:19.085169  952821 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:08:19.085262  952821 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:08:19.085381  952821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:08:19.096371  952821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:08:19.120767  952821 start.go:296] duration metric: took 129.491964ms for postStartSetup
	I0127 03:08:19.120832  952821 main.go:141] libmachine: (auto-284111) Calling .GetConfigRaw
	I0127 03:08:19.121504  952821 main.go:141] libmachine: (auto-284111) Calling .GetIP
	I0127 03:08:19.124394  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.124806  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:19.124841  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.125100  952821 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/config.json ...
	I0127 03:08:19.125357  952821 start.go:128] duration metric: took 24.491172271s to createHost
	I0127 03:08:19.125394  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:19.127910  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.128370  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:19.128404  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.128591  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:19.128769  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:19.128977  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:19.129095  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:19.129272  952821 main.go:141] libmachine: Using SSH client type: native
	I0127 03:08:19.129498  952821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.8 22 <nil> <nil>}
	I0127 03:08:19.129512  952821 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:08:19.237564  952821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947299.194716430
	
	I0127 03:08:19.237592  952821 fix.go:216] guest clock: 1737947299.194716430
	I0127 03:08:19.237599  952821 fix.go:229] Guest: 2025-01-27 03:08:19.19471643 +0000 UTC Remote: 2025-01-27 03:08:19.125376395 +0000 UTC m=+24.605675481 (delta=69.340035ms)
	I0127 03:08:19.237622  952821 fix.go:200] guest clock delta is within tolerance: 69.340035ms
	I0127 03:08:19.237627  952821 start.go:83] releasing machines lock for "auto-284111", held for 24.603563098s
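The fix.go lines above read date +%s.%N on the guest, compare it with the local wall clock and accept the ~69ms delta as within tolerance. A hand-rolled version of the same comparison, reusing the logged key and address (awk is used here only for the float arithmetic):

    KEY=/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
            -o IdentitiesOnly=yes -i "$KEY" docker@192.168.61.8 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "clock delta: %.3fs\n", d }'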
	I0127 03:08:19.237645  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:19.237945  952821 main.go:141] libmachine: (auto-284111) Calling .GetIP
	I0127 03:08:19.240673  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.241047  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:19.241078  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.241266  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:19.241770  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:19.241957  952821 main.go:141] libmachine: (auto-284111) Calling .DriverName
	I0127 03:08:19.242071  952821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:08:19.242125  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:19.242180  952821 ssh_runner.go:195] Run: cat /version.json
	I0127 03:08:19.242211  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHHostname
	I0127 03:08:19.244781  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.245183  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:19.245207  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.245226  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.245377  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:19.245556  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:19.245707  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:19.245729  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:19.245749  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:19.245898  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHPort
	I0127 03:08:19.245977  952821 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa Username:docker}
	I0127 03:08:19.246066  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHKeyPath
	I0127 03:08:19.246241  952821 main.go:141] libmachine: (auto-284111) Calling .GetSSHUsername
	I0127 03:08:19.246388  952821 sshutil.go:53] new ssh client: &{IP:192.168.61.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/auto-284111/id_rsa Username:docker}
	I0127 03:08:19.359019  952821 ssh_runner.go:195] Run: systemctl --version
	I0127 03:08:19.364717  952821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:08:19.522342  952821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:08:19.528093  952821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:08:19.528161  952821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:08:19.544295  952821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
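The find/mv pair above renames any bridge or podman CNI config to *.mk_disabled so only the CNI that minikube manages stays active; here it disabled 87-podman-bridge.conflist. A quick look from inside the guest (for example over the SSH key used above) would show the renamed file:

    ls -l /etc/cni/net.d/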
	I0127 03:08:19.544323  952821 start.go:495] detecting cgroup driver to use...
	I0127 03:08:19.544395  952821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:08:19.560719  952821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:08:19.575502  952821 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:08:19.575571  952821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:08:19.591135  952821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:08:19.606271  952821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:08:19.733218  952821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:08:19.898015  952821 docker.go:233] disabling docker service ...
	I0127 03:08:19.898101  952821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:08:19.912233  952821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:08:19.924948  952821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:08:20.059370  952821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:08:20.195122  952821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:08:20.208550  952821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:08:20.225864  952821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:08:20.225960  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.235923  952821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:08:20.235996  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.246092  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.256502  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.267323  952821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:08:20.278763  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.288669  952821 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.304775  952821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:08:20.314858  952821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:08:20.323781  952821 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:08:20.323836  952821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:08:20.337105  952821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:08:20.346227  952821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:08:20.463121  952821 ssh_runner.go:195] Run: sudo systemctl restart crio
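The sed commands above edit the 02-crio.conf drop-in in place: they pin pause_image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs with conmon_cgroup = pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before crio is restarted. A quick check from inside the guest that the edits landed and the service came back:

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    systemctl is-active crio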
	I0127 03:08:20.554750  952821 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:08:20.554856  952821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:08:20.559881  952821 start.go:563] Will wait 60s for crictl version
	I0127 03:08:20.559967  952821 ssh_runner.go:195] Run: which crictl
	I0127 03:08:20.563849  952821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:08:20.607698  952821 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:08:20.607824  952821 ssh_runner.go:195] Run: crio --version
	I0127 03:08:20.637384  952821 ssh_runner.go:195] Run: crio --version
	I0127 03:08:20.667786  952821 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:08:18.069881  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:20.071334  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:23.310834  948597 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 03:08:23.311157  948597 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 03:08:23.311197  948597 kubeadm.go:310] 
	I0127 03:08:23.311268  948597 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 03:08:23.311329  948597 kubeadm.go:310] 		timed out waiting for the condition
	I0127 03:08:23.311340  948597 kubeadm.go:310] 
	I0127 03:08:23.311389  948597 kubeadm.go:310] 	This error is likely caused by:
	I0127 03:08:23.311462  948597 kubeadm.go:310] 		- The kubelet is not running
	I0127 03:08:23.311810  948597 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 03:08:23.311831  948597 kubeadm.go:310] 
	I0127 03:08:23.312005  948597 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 03:08:23.312067  948597 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 03:08:23.312119  948597 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 03:08:23.312130  948597 kubeadm.go:310] 
	I0127 03:08:23.312273  948597 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 03:08:23.312352  948597 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 03:08:23.312366  948597 kubeadm.go:310] 
	I0127 03:08:23.312514  948597 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 03:08:23.312631  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 03:08:23.312738  948597 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 03:08:23.312844  948597 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 03:08:23.312858  948597 kubeadm.go:310] 
	I0127 03:08:23.313323  948597 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:08:23.313454  948597 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 03:08:23.313565  948597 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 03:08:23.313622  948597 kubeadm.go:394] duration metric: took 7m58.597329739s to StartCluster
	I0127 03:08:23.313680  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:08:23.313755  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:08:23.367103  948597 cri.go:89] found id: ""
	I0127 03:08:23.367143  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.367157  948597 logs.go:284] No container was found matching "kube-apiserver"
	I0127 03:08:23.367165  948597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 03:08:23.367244  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:08:23.405957  948597 cri.go:89] found id: ""
	I0127 03:08:23.406002  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.406013  948597 logs.go:284] No container was found matching "etcd"
	I0127 03:08:23.406021  948597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 03:08:23.406087  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:08:23.440876  948597 cri.go:89] found id: ""
	I0127 03:08:23.440944  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.440960  948597 logs.go:284] No container was found matching "coredns"
	I0127 03:08:23.440973  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:08:23.441059  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:08:23.476013  948597 cri.go:89] found id: ""
	I0127 03:08:23.476054  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.476066  948597 logs.go:284] No container was found matching "kube-scheduler"
	I0127 03:08:23.476075  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:08:23.476144  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:08:23.517449  948597 cri.go:89] found id: ""
	I0127 03:08:23.517486  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.517498  948597 logs.go:284] No container was found matching "kube-proxy"
	I0127 03:08:23.517506  948597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:08:23.517572  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:08:23.561689  948597 cri.go:89] found id: ""
	I0127 03:08:23.561730  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.561743  948597 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 03:08:23.561752  948597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 03:08:23.561807  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 03:08:23.621509  948597 cri.go:89] found id: ""
	I0127 03:08:23.621555  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.621567  948597 logs.go:284] No container was found matching "kindnet"
	I0127 03:08:23.621576  948597 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:08:23.621658  948597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:08:23.690361  948597 cri.go:89] found id: ""
	I0127 03:08:23.690399  948597 logs.go:282] 0 containers: []
	W0127 03:08:23.690411  948597 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 03:08:23.690424  948597 logs.go:123] Gathering logs for dmesg ...
	I0127 03:08:23.690451  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:08:23.710302  948597 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:08:23.710345  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 03:08:23.816241  948597 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 03:08:23.816269  948597 logs.go:123] Gathering logs for CRI-O ...
	I0127 03:08:23.816283  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 03:08:23.931831  948597 logs.go:123] Gathering logs for container status ...
	I0127 03:08:23.931883  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 03:08:23.974187  948597 logs.go:123] Gathering logs for kubelet ...
	I0127 03:08:23.974224  948597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 03:08:24.036237  948597 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 03:08:24.036324  948597 out.go:270] * 
	W0127 03:08:24.036416  948597 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:08:24.036440  948597 out.go:270] * 
	W0127 03:08:24.037735  948597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 03:08:24.041238  948597 out.go:201] 
	W0127 03:08:24.042360  948597 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:08:24.042425  948597 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 03:08:24.042454  948597 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 03:08:24.043803  948597 out.go:201] 
	I0127 03:08:20.668988  952821 main.go:141] libmachine: (auto-284111) Calling .GetIP
	I0127 03:08:20.671856  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:20.672173  952821 main.go:141] libmachine: (auto-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:20", ip: ""} in network mk-auto-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:08:09 +0000 UTC Type:0 Mac:52:54:00:d9:35:20 Iaid: IPaddr:192.168.61.8 Prefix:24 Hostname:auto-284111 Clientid:01:52:54:00:d9:35:20}
	I0127 03:08:20.672199  952821 main.go:141] libmachine: (auto-284111) DBG | domain auto-284111 has defined IP address 192.168.61.8 and MAC address 52:54:00:d9:35:20 in network mk-auto-284111
	I0127 03:08:20.672413  952821 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:08:20.676541  952821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:08:20.688630  952821 kubeadm.go:883] updating cluster {Name:auto-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.8 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:08:20.688749  952821 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:08:20.688802  952821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:08:20.719859  952821 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:08:20.719930  952821 ssh_runner.go:195] Run: which lz4
	I0127 03:08:20.723930  952821 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:08:20.727938  952821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:08:20.727975  952821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:08:22.013950  952821 crio.go:462] duration metric: took 1.290062728s to copy over tarball
	I0127 03:08:22.014036  952821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:08:24.459322  952821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.445254409s)
	I0127 03:08:24.459353  952821 crio.go:469] duration metric: took 2.445370064s to extract the tarball
	I0127 03:08:24.459363  952821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:08:24.498713  952821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:08:24.543732  952821 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:08:24.543762  952821 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:08:24.543773  952821 kubeadm.go:934] updating node { 192.168.61.8 8443 v1.32.1 crio true true} ...
	I0127 03:08:24.543902  952821 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-284111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:auto-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:08:24.544009  952821 ssh_runner.go:195] Run: crio config
	
	
	==> CRI-O <==
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.063924354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947305063881539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57c3a306-0636-4e0e-83f4-72281905668e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.064454345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa3a62e6-0caa-442f-a88b-578adecb811c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.064620780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa3a62e6-0caa-442f-a88b-578adecb811c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.064680515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa3a62e6-0caa-442f-a88b-578adecb811c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.097285027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8d44c5d-2662-44da-a157-0b45cebface4 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.097403814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8d44c5d-2662-44da-a157-0b45cebface4 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.098640480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80b84b83-6ddd-4a5d-8a3b-fe7ad8ee0659 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.099878735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947305099695057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80b84b83-6ddd-4a5d-8a3b-fe7ad8ee0659 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.101130438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1661d771-34dc-4a67-ad14-e1345470f38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.101184394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1661d771-34dc-4a67-ad14-e1345470f38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.101224645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1661d771-34dc-4a67-ad14-e1345470f38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.136878964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c50a093e-72ed-4612-91e2-3423f0584c87 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.136993877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c50a093e-72ed-4612-91e2-3423f0584c87 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.138027754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=050ceb6e-8258-449f-a791-502c9d811706 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.138376418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947305138357169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=050ceb6e-8258-449f-a791-502c9d811706 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.138999037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3016ddaa-e458-426b-8dc7-f962e5c0f119 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.139045327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3016ddaa-e458-426b-8dc7-f962e5c0f119 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.139076831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3016ddaa-e458-426b-8dc7-f962e5c0f119 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.169258385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f14bf82-cf90-4a00-848b-9a2ae56ef128 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.169328888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f14bf82-cf90-4a00-848b-9a2ae56ef128 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.170442923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d237a6b-c9ca-4880-abb9-b129b0622587 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.170902248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947305170877698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d237a6b-c9ca-4880-abb9-b129b0622587 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.171310935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60628b2e-2ac5-4b20-b953-4cd66cfd5380 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.171378529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60628b2e-2ac5-4b20-b953-4cd66cfd5380 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:08:25 old-k8s-version-542356 crio[627]: time="2025-01-27 03:08:25.171411980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60628b2e-2ac5-4b20-b953-4cd66cfd5380 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 03:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.073990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603879] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.822966] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.059849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073801] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.176250] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.120814] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.231774] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.304285] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.064590] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.124064] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.435199] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 03:04] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Jan27 03:06] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.074098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:08:25 up 8 min,  0 users,  load average: 0.07, 0.07, 0.03
	Linux old-k8s-version-542356 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000979130, 0xc0008df800, 0x23, 0xc000b8a900)
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: created by internal/singleflight.(*Group).DoChan
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: goroutine 155 [runnable]:
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: net._C2func_getaddrinfo(0xc000bc3160, 0x0, 0xc000bccc00, 0xc000122858, 0x0, 0x0, 0x0)
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         _cgo_gotypes.go:94 +0x55
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: net.cgoLookupIPCNAME.func1(0xc000bc3160, 0x20, 0x20, 0xc000bccc00, 0xc000122858, 0x138, 0xc000a716a0, 0x57a492)
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0008df7d0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: net.cgoIPLookup(0xc000b97b00, 0x48ab5d6, 0x3, 0xc0008df7d0, 0x1f)
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]: created by net.cgoLookupIP
	Jan 27 03:08:22 old-k8s-version-542356 kubelet[5501]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 27 03:08:22 old-k8s-version-542356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 03:08:22 old-k8s-version-542356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 03:08:23 old-k8s-version-542356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 03:08:23 old-k8s-version-542356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 03:08:23 old-k8s-version-542356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 03:08:23 old-k8s-version-542356 kubelet[5540]: I0127 03:08:23.669505    5540 server.go:416] Version: v1.20.0
	Jan 27 03:08:23 old-k8s-version-542356 kubelet[5540]: I0127 03:08:23.669859    5540 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 03:08:23 old-k8s-version-542356 kubelet[5540]: I0127 03:08:23.671909    5540 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 03:08:23 old-k8s-version-542356 kubelet[5540]: I0127 03:08:23.673806    5540 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 27 03:08:23 old-k8s-version-542356 kubelet[5540]: W0127 03:08:23.673945    5540 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (245.964611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-542356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.89s)
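For manual follow-up on this failure, the remediation steps that the captured output itself suggests can be replayed against the node. This is only a sketch, assuming the old-k8s-version-542356 profile and its kvm2 VM are still present; the commands below come from the kubeadm advice and the minikube suggestion shown in the log above, not from the test harness:

	# Inspect the kubelet on the node (commands recommended by kubeadm in the log above)
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo journalctl -xeu kubelet"
	# List Kubernetes containers via CRI-O, as the kubeadm output suggests
	out/minikube-linux-amd64 -p old-k8s-version-542356 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override suggested above (minikube issue #4172)
	out/minikube-linux-amd64 start -p old-k8s-version-542356 --extra-config=kubelet.cgroup-driver=systemd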

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1615.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-150897 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-150897 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m53.228776862s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-150897] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-150897" primary control-plane node in "default-k8s-diff-port-150897" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-150897" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-150897 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 03:05:57.003582  951018 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:05:57.003715  951018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:05:57.003727  951018 out.go:358] Setting ErrFile to fd 2...
	I0127 03:05:57.003733  951018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:05:57.004042  951018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:05:57.004809  951018 out.go:352] Setting JSON to false
	I0127 03:05:57.006360  951018 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13700,"bootTime":1737933457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:05:57.006514  951018 start.go:139] virtualization: kvm guest
	I0127 03:05:57.009014  951018 out.go:177] * [default-k8s-diff-port-150897] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:05:57.010651  951018 notify.go:220] Checking for updates...
	I0127 03:05:57.010683  951018 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:05:57.011935  951018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:05:57.013218  951018 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:05:57.014432  951018 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:05:57.015564  951018 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:05:57.017146  951018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:05:57.018853  951018 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:05:57.019252  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:05:57.019304  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:05:57.035994  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I0127 03:05:57.036551  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:05:57.037430  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:05:57.037455  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:05:57.037883  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:05:57.038164  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:05:57.038484  951018 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:05:57.038954  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:05:57.038999  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:05:57.054276  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0127 03:05:57.054758  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:05:57.055310  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:05:57.055357  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:05:57.055770  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:05:57.056060  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:05:57.093451  951018 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 03:05:57.094727  951018 start.go:297] selected driver: kvm2
	I0127 03:05:57.094745  951018 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-150897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-150897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.57 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:05:57.094926  951018 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:05:57.095687  951018 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:05:57.095775  951018 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:05:57.112046  951018 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:05:57.112449  951018 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:05:57.112483  951018 cni.go:84] Creating CNI manager for ""
	I0127 03:05:57.112528  951018 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:05:57.112562  951018 start.go:340] cluster config:
	{Name:default-k8s-diff-port-150897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-150897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.57 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:05:57.112680  951018 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:05:57.114391  951018 out.go:177] * Starting "default-k8s-diff-port-150897" primary control-plane node in "default-k8s-diff-port-150897" cluster
	I0127 03:05:57.115564  951018 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:05:57.115602  951018 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:05:57.115627  951018 cache.go:56] Caching tarball of preloaded images
	I0127 03:05:57.115745  951018 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:05:57.115760  951018 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:05:57.115898  951018 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/config.json ...
	I0127 03:05:57.116132  951018 start.go:360] acquireMachinesLock for default-k8s-diff-port-150897: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:05:57.116183  951018 start.go:364] duration metric: took 28.464µs to acquireMachinesLock for "default-k8s-diff-port-150897"
	I0127 03:05:57.116202  951018 start.go:96] Skipping create...Using existing machine configuration
	I0127 03:05:57.116214  951018 fix.go:54] fixHost starting: 
	I0127 03:05:57.116552  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:05:57.116598  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:05:57.133300  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0127 03:05:57.133740  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:05:57.134240  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:05:57.134263  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:05:57.134634  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:05:57.134838  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:05:57.135032  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:05:57.136915  951018 fix.go:112] recreateIfNeeded on default-k8s-diff-port-150897: state=Stopped err=<nil>
	I0127 03:05:57.136977  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	W0127 03:05:57.137158  951018 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 03:05:57.139187  951018 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-150897" ...
	I0127 03:05:57.140375  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Start
	I0127 03:05:57.140608  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) starting domain...
	I0127 03:05:57.140630  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) ensuring networks are active...
	I0127 03:05:57.141584  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Ensuring network default is active
	I0127 03:05:57.141953  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Ensuring network mk-default-k8s-diff-port-150897 is active
	I0127 03:05:57.142481  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) getting domain XML...
	I0127 03:05:57.143329  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) creating domain...
	I0127 03:05:58.526466  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) waiting for IP...
	I0127 03:05:58.527597  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:05:58.528208  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:05:58.528344  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:05:58.528171  951052 retry.go:31] will retry after 253.786194ms: waiting for domain to come up
	I0127 03:05:58.783972  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:05:58.784544  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:05:58.784573  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:05:58.784511  951052 retry.go:31] will retry after 266.735179ms: waiting for domain to come up
	I0127 03:05:59.053329  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.053894  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.053927  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:05:59.053882  951052 retry.go:31] will retry after 354.935674ms: waiting for domain to come up
	I0127 03:05:59.410567  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.411100  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.411140  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:05:59.411089  951052 retry.go:31] will retry after 447.083631ms: waiting for domain to come up
	I0127 03:05:59.859734  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.860398  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:05:59.860431  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:05:59.860357  951052 retry.go:31] will retry after 523.312199ms: waiting for domain to come up
	I0127 03:06:00.385055  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:00.385523  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:00.385553  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:00.385460  951052 retry.go:31] will retry after 637.343464ms: waiting for domain to come up
	I0127 03:06:01.024167  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:01.024725  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:01.024759  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:01.024674  951052 retry.go:31] will retry after 1.031721699s: waiting for domain to come up
	I0127 03:06:02.058345  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:02.058891  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:02.058936  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:02.058861  951052 retry.go:31] will retry after 976.159885ms: waiting for domain to come up
	I0127 03:06:03.036740  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:03.037248  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:03.037274  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:03.037212  951052 retry.go:31] will retry after 1.642945148s: waiting for domain to come up
	I0127 03:06:04.682006  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:04.682524  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:04.682553  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:04.682486  951052 retry.go:31] will retry after 1.733220955s: waiting for domain to come up
	I0127 03:06:06.418514  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:06.419019  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:06.419050  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:06.419003  951052 retry.go:31] will retry after 2.721150029s: waiting for domain to come up
	I0127 03:06:09.142171  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:09.142770  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:09.142806  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:09.142744  951052 retry.go:31] will retry after 3.460426366s: waiting for domain to come up
	I0127 03:06:12.605038  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:12.605511  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | unable to find current IP address of domain default-k8s-diff-port-150897 in network mk-default-k8s-diff-port-150897
	I0127 03:06:12.605537  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | I0127 03:06:12.605466  951052 retry.go:31] will retry after 3.622140726s: waiting for domain to come up
	I0127 03:06:16.231899  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.232426  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) found domain IP: 192.168.50.57
	I0127 03:06:16.232461  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has current primary IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.232469  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) reserving static IP address...
	I0127 03:06:16.232872  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-150897", mac: "52:54:00:06:f2:51", ip: "192.168.50.57"} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.232912  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | skip adding static IP to network mk-default-k8s-diff-port-150897 - found existing host DHCP lease matching {name: "default-k8s-diff-port-150897", mac: "52:54:00:06:f2:51", ip: "192.168.50.57"}
	I0127 03:06:16.232952  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) reserved static IP address 192.168.50.57 for domain default-k8s-diff-port-150897
	I0127 03:06:16.232968  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Getting to WaitForSSH function...
	I0127 03:06:16.233004  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) waiting for SSH...
	I0127 03:06:16.234973  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.235388  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.235436  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.235551  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Using SSH client type: external
	I0127 03:06:16.235580  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa (-rw-------)
	I0127 03:06:16.235626  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:06:16.235649  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | About to run SSH command:
	I0127 03:06:16.235697  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | exit 0
	I0127 03:06:16.356771  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | SSH cmd err, output: <nil>: 
	I0127 03:06:16.357203  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetConfigRaw
	I0127 03:06:16.358068  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetIP
	I0127 03:06:16.361059  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.361492  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.361524  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.361783  951018 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/config.json ...
	I0127 03:06:16.362044  951018 machine.go:93] provisionDockerMachine start ...
	I0127 03:06:16.362069  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:16.362286  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:16.364639  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.365062  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.365092  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.365234  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:16.365397  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.365512  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.365677  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:16.365841  951018 main.go:141] libmachine: Using SSH client type: native
	I0127 03:06:16.366080  951018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.57 22 <nil> <nil>}
	I0127 03:06:16.366094  951018 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 03:06:16.473237  951018 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 03:06:16.473266  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetMachineName
	I0127 03:06:16.473520  951018 buildroot.go:166] provisioning hostname "default-k8s-diff-port-150897"
	I0127 03:06:16.473549  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetMachineName
	I0127 03:06:16.473729  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:16.477158  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.477558  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.477588  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.478948  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:16.479149  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.479325  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.479461  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:16.479665  951018 main.go:141] libmachine: Using SSH client type: native
	I0127 03:06:16.479915  951018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.57 22 <nil> <nil>}
	I0127 03:06:16.479934  951018 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-150897 && echo "default-k8s-diff-port-150897" | sudo tee /etc/hostname
	I0127 03:06:16.601081  951018 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-150897
	
	I0127 03:06:16.601115  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:16.603999  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.604334  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.604366  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.604558  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:16.604754  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.604966  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.605109  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:16.605268  951018 main.go:141] libmachine: Using SSH client type: native
	I0127 03:06:16.605464  951018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.57 22 <nil> <nil>}
	I0127 03:06:16.605489  951018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-150897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-150897/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-150897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:06:16.723506  951018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:06:16.723540  951018 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:06:16.723566  951018 buildroot.go:174] setting up certificates
	I0127 03:06:16.723581  951018 provision.go:84] configureAuth start
	I0127 03:06:16.723590  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetMachineName
	I0127 03:06:16.723856  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetIP
	I0127 03:06:16.727268  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.727727  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.727760  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.727943  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:16.730502  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.730825  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.730872  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.730947  951018 provision.go:143] copyHostCerts
	I0127 03:06:16.731012  951018 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:06:16.731026  951018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:06:16.731093  951018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:06:16.731198  951018 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:06:16.731206  951018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:06:16.731238  951018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:06:16.731306  951018 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:06:16.731313  951018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:06:16.731353  951018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:06:16.731448  951018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-150897 san=[127.0.0.1 192.168.50.57 default-k8s-diff-port-150897 localhost minikube]
	I0127 03:06:16.861570  951018 provision.go:177] copyRemoteCerts
	I0127 03:06:16.861636  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:06:16.861671  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:16.864793  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.865200  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:16.865228  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:16.865576  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:16.865766  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:16.865937  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:16.866086  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:06:16.952302  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:06:16.975429  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 03:06:16.998996  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:06:17.022908  951018 provision.go:87] duration metric: took 299.30935ms to configureAuth
	I0127 03:06:17.022953  951018 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:06:17.023242  951018 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:06:17.023390  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:17.026046  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.026475  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.026504  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.026711  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:17.026933  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.027150  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.027344  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:17.027545  951018 main.go:141] libmachine: Using SSH client type: native
	I0127 03:06:17.027787  951018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.57 22 <nil> <nil>}
	I0127 03:06:17.027813  951018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:06:17.278620  951018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:06:17.278650  951018 machine.go:96] duration metric: took 916.588555ms to provisionDockerMachine
	I0127 03:06:17.278665  951018 start.go:293] postStartSetup for "default-k8s-diff-port-150897" (driver="kvm2")
	I0127 03:06:17.278677  951018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:06:17.278698  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:17.279024  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:06:17.279061  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:17.281963  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.282348  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.282378  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.282546  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:17.282740  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.282917  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:17.283125  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:06:17.368511  951018 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:06:17.372671  951018 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:06:17.372701  951018 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:06:17.372782  951018 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:06:17.372906  951018 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:06:17.373072  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:06:17.383670  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:06:17.407406  951018 start.go:296] duration metric: took 128.725079ms for postStartSetup
	I0127 03:06:17.407455  951018 fix.go:56] duration metric: took 20.291239835s for fixHost
	I0127 03:06:17.407486  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:17.410544  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.410893  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.410921  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.411144  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:17.411361  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.411553  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.411696  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:17.411913  951018 main.go:141] libmachine: Using SSH client type: native
	I0127 03:06:17.412146  951018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.57 22 <nil> <nil>}
	I0127 03:06:17.412164  951018 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:06:17.518368  951018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947177.492961121
	
	I0127 03:06:17.518396  951018 fix.go:216] guest clock: 1737947177.492961121
	I0127 03:06:17.518406  951018 fix.go:229] Guest: 2025-01-27 03:06:17.492961121 +0000 UTC Remote: 2025-01-27 03:06:17.407464715 +0000 UTC m=+20.447530561 (delta=85.496406ms)
	I0127 03:06:17.518431  951018 fix.go:200] guest clock delta is within tolerance: 85.496406ms
	I0127 03:06:17.518438  951018 start.go:83] releasing machines lock for "default-k8s-diff-port-150897", held for 20.402243775s
	I0127 03:06:17.518464  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:17.518741  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetIP
	I0127 03:06:17.521539  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.521886  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.521918  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.522085  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:17.522590  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:17.522836  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:06:17.522949  951018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:06:17.522995  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:17.523064  951018 ssh_runner.go:195] Run: cat /version.json
	I0127 03:06:17.523125  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:06:17.527482  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.527512  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.527870  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.527902  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.528047  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:17.528217  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.528379  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:17.528515  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:06:17.528836  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:17.528864  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:17.529249  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:06:17.529435  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:06:17.529609  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:06:17.529752  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:06:17.605739  951018 ssh_runner.go:195] Run: systemctl --version
	I0127 03:06:17.633022  951018 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:06:17.778958  951018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:06:17.785715  951018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:06:17.785776  951018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:06:17.802474  951018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:06:17.802509  951018 start.go:495] detecting cgroup driver to use...
	I0127 03:06:17.802579  951018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:06:17.820346  951018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:06:17.834043  951018 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:06:17.834117  951018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:06:17.847617  951018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:06:17.861329  951018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:06:17.988566  951018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:06:18.156188  951018 docker.go:233] disabling docker service ...
	I0127 03:06:18.156268  951018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:06:18.172432  951018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:06:18.187390  951018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:06:18.359769  951018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:06:18.499713  951018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:06:18.513696  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:06:18.533968  951018 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:06:18.534041  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.545176  951018 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:06:18.545266  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.557152  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.567861  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.578567  951018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:06:18.589696  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.600243  951018 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.617461  951018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:06:18.628197  951018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:06:18.638139  951018 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:06:18.638206  951018 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:06:18.653163  951018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:06:18.671242  951018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:06:18.804766  951018 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:06:18.903799  951018 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:06:18.903907  951018 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:06:18.910103  951018 start.go:563] Will wait 60s for crictl version
	I0127 03:06:18.910193  951018 ssh_runner.go:195] Run: which crictl
	I0127 03:06:18.914231  951018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:06:18.953728  951018 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:06:18.953823  951018 ssh_runner.go:195] Run: crio --version
	I0127 03:06:18.991456  951018 ssh_runner.go:195] Run: crio --version
	I0127 03:06:19.026835  951018 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:06:19.028128  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetIP
	I0127 03:06:19.031606  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:19.032013  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:06:19.032045  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:06:19.032227  951018 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 03:06:19.036398  951018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:06:19.049734  951018 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-150897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-150897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.57 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:06:19.049889  951018 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:06:19.049951  951018 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:06:19.088061  951018 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:06:19.088132  951018 ssh_runner.go:195] Run: which lz4
	I0127 03:06:19.092079  951018 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:06:19.096219  951018 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:06:19.096246  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:06:20.406663  951018 crio.go:462] duration metric: took 1.314607462s to copy over tarball
	I0127 03:06:20.406774  951018 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:06:22.660353  951018 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253547952s)
	I0127 03:06:22.660387  951018 crio.go:469] duration metric: took 2.253685477s to extract the tarball
	I0127 03:06:22.660398  951018 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:06:22.698066  951018 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:06:22.742221  951018 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:06:22.742254  951018 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:06:22.742273  951018 kubeadm.go:934] updating node { 192.168.50.57 8444 v1.32.1 crio true true} ...
	I0127 03:06:22.742425  951018 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-150897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-150897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 03:06:22.742527  951018 ssh_runner.go:195] Run: crio config
	I0127 03:06:22.793829  951018 cni.go:84] Creating CNI manager for ""
	I0127 03:06:22.793855  951018 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:06:22.793866  951018 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:06:22.793889  951018 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.57 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-150897 NodeName:default-k8s-diff-port-150897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:06:22.794010  951018 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.57
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-150897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.57"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.57"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:06:22.794108  951018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:06:22.804409  951018 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:06:22.804484  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:06:22.814092  951018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0127 03:06:22.832187  951018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:06:22.854632  951018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
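
The three scp memory lines above write the kubelet drop-in, the kubelet unit, and the kubeadm config shown earlier to the node, the last as /var/tmp/minikube/kubeadm.yaml.new. The sketch below is mine, not minikube code: it lists the apiVersion/kind of each document in such a generated file, which should show the four documents from the dump above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The local file path and the use of gopkg.in/yaml.v3 are assumptions.

// inspect_kubeadm_yaml.go - a minimal sketch, not part of minikube.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated /var/tmp/minikube/kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Prints one line per YAML document, e.g. "kubeadm.k8s.io/v1beta4/InitConfiguration".
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}
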
	I0127 03:06:22.872701  951018 ssh_runner.go:195] Run: grep 192.168.50.57	control-plane.minikube.internal$ /etc/hosts
	I0127 03:06:22.877038  951018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:06:22.890595  951018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:06:23.013942  951018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:06:23.030505  951018 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897 for IP: 192.168.50.57
	I0127 03:06:23.030543  951018 certs.go:194] generating shared ca certs ...
	I0127 03:06:23.030573  951018 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:06:23.030797  951018 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:06:23.030861  951018 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:06:23.030876  951018 certs.go:256] generating profile certs ...
	I0127 03:06:23.031023  951018 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/client.key
	I0127 03:06:23.031110  951018 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/apiserver.key.db298c73
	I0127 03:06:23.031173  951018 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/proxy-client.key
	I0127 03:06:23.031333  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:06:23.031377  951018 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:06:23.031391  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:06:23.031426  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:06:23.031461  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:06:23.031557  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:06:23.031624  951018 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:06:23.032470  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:06:23.067292  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:06:23.114909  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:06:23.161751  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:06:23.203687  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 03:06:23.236474  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:06:23.262580  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:06:23.286630  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/default-k8s-diff-port-150897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:06:23.313180  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:06:23.339295  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:06:23.370932  951018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:06:23.399363  951018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:06:23.417495  951018 ssh_runner.go:195] Run: openssl version
	I0127 03:06:23.423919  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:06:23.435072  951018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:06:23.439824  951018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:06:23.439889  951018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:06:23.446000  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:06:23.456599  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:06:23.466967  951018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:06:23.471407  951018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:06:23.471472  951018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:06:23.477663  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:06:23.489264  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:06:23.500595  951018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:06:23.505529  951018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:06:23.505585  951018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:06:23.511628  951018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:06:23.523207  951018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:06:23.527904  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 03:06:23.534127  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 03:06:23.541613  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 03:06:23.547411  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 03:06:23.552850  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 03:06:23.558579  951018 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
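
Each of the openssl x509 -noout -in <cert> -checkend 86400 runs above exits non-zero if the given certificate would expire within the next 86400 seconds (24 hours); that is how the existing control-plane certificates are screened before being reused. Below is a rough Go equivalent of one such check, written as a sketch rather than minikube's code; the certificate path is a placeholder.

// cert_checkend.go - sketch of an "expires within 24h?" probe using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror `-checkend 86400`: fail if NotAfter falls before now+24h.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h, NotAfter:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, NotAfter:", cert.NotAfter)
}
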
	I0127 03:06:23.564178  951018 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-150897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-150897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.57 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:06:23.564305  951018 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:06:23.564386  951018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:06:23.604624  951018 cri.go:89] found id: ""
	I0127 03:06:23.604706  951018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:06:23.615219  951018 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 03:06:23.615245  951018 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 03:06:23.615318  951018 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:06:23.625361  951018 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:06:23.626200  951018 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-150897" does not appear in /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:06:23.626539  951018 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-897624/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-150897" cluster setting kubeconfig missing "default-k8s-diff-port-150897" context setting]
	I0127 03:06:23.627042  951018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:06:23.629068  951018 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:06:23.638776  951018 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.57
	I0127 03:06:23.638813  951018 kubeadm.go:1160] stopping kube-system containers ...
	I0127 03:06:23.638829  951018 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 03:06:23.638887  951018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:06:23.675282  951018 cri.go:89] found id: ""
	I0127 03:06:23.675365  951018 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 03:06:23.693043  951018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:06:23.706120  951018 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:06:23.706152  951018 kubeadm.go:157] found existing configuration files:
	
	I0127 03:06:23.706211  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:06:23.719359  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:06:23.719452  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:06:23.729267  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:06:23.738671  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:06:23.738743  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:06:23.748957  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:06:23.758276  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:06:23.758357  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:06:23.767558  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:06:23.776547  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:06:23.776614  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:06:23.786601  951018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:06:23.796024  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:23.920706  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:25.289378  951018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368633883s)
	I0127 03:06:25.289412  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:25.499855  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:25.566961  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:25.658708  951018 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:06:25.658837  951018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:06:26.158890  951018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:06:26.659533  951018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:06:26.679782  951018 api_server.go:72] duration metric: took 1.021074236s to wait for apiserver process to appear ...
	I0127 03:06:26.679815  951018 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:06:26.679858  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:26.680441  951018 api_server.go:269] stopped: https://192.168.50.57:8444/healthz: Get "https://192.168.50.57:8444/healthz": dial tcp 192.168.50.57:8444: connect: connection refused
	I0127 03:06:27.180284  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:29.602429  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:06:29.602463  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:06:29.602487  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:29.636856  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 03:06:29.636899  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 03:06:29.680151  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:29.706245  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:29.706286  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:30.179914  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:30.184852  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:30.184883  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:30.680600  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:30.691995  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:30.692036  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:31.180787  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:31.186856  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:31.186900  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:31.680603  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:31.691720  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:31.691751  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:32.180356  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:32.186236  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:32.186267  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:32.680845  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:32.686762  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 03:06:32.686792  951018 api_server.go:103] status: https://192.168.50.57:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 03:06:33.179967  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:06:33.185347  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 200:
	ok
	I0127 03:06:33.192348  951018 api_server.go:141] control plane version: v1.32.1
	I0127 03:06:33.192382  951018 api_server.go:131] duration metric: took 6.512544726s to wait for apiserver health ...
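
The api_server.go lines above poll https://192.168.50.57:8444/healthz roughly every 500ms: the first attempt is refused, the next ones return 403 for the anonymous probe, then 500 while post-start hooks are still failing, and finally 200 ("ok") after about 6.5 seconds. The following is a simplified sketch of that polling pattern, my illustration rather than minikube's implementation; TLS verification is skipped because only the HTTP status matters here, and the URL and interval are taken from the log.

// healthz_poll.go - sketch of polling an apiserver healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.57:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expect "ok"
				return
			}
			fmt.Println("healthz not ready, status:", resp.StatusCode)
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
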
	I0127 03:06:33.192392  951018 cni.go:84] Creating CNI manager for ""
	I0127 03:06:33.192399  951018 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:06:33.194436  951018 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:06:33.195716  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:06:33.206494  951018 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:06:33.225646  951018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:06:33.235376  951018 system_pods.go:59] 8 kube-system pods found
	I0127 03:06:33.235419  951018 system_pods.go:61] "coredns-668d6bf9bc-x5wqt" [23a02485-9996-48bb-b6c4-92534ae12676] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 03:06:33.235429  951018 system_pods.go:61] "etcd-default-k8s-diff-port-150897" [718be04d-eb92-42c3-bf59-f536d1f28f2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 03:06:33.235437  951018 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-150897" [b3186476-5c81-44fd-8e1a-d70763f5a480] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 03:06:33.235443  951018 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-150897" [26ea5eba-3b29-4569-a318-ee8dce8d9789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 03:06:33.235448  951018 system_pods.go:61] "kube-proxy-sptdr" [2df54281-e27b-40ff-849a-6b90515267e9] Running
	I0127 03:06:33.235453  951018 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-150897" [f31952ed-4ce7-4de3-8968-ba01c630dfb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 03:06:33.235458  951018 system_pods.go:61] "metrics-server-f79f97bbb-6k8x4" [8fe54239-bb85-45a1-afba-86600723b4d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:06:33.235462  951018 system_pods.go:61] "storage-provisioner" [c8a0e5ed-59db-4a93-ab72-2659e7016778] Running
	I0127 03:06:33.235468  951018 system_pods.go:74] duration metric: took 9.792722ms to wait for pod list to return data ...
	I0127 03:06:33.235474  951018 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:06:33.239095  951018 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:06:33.239133  951018 node_conditions.go:123] node cpu capacity is 2
	I0127 03:06:33.239149  951018 node_conditions.go:105] duration metric: took 3.669443ms to run NodePressure ...
	I0127 03:06:33.239170  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 03:06:33.519878  951018 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 03:06:33.524605  951018 kubeadm.go:739] kubelet initialised
	I0127 03:06:33.524628  951018 kubeadm.go:740] duration metric: took 4.718587ms waiting for restarted kubelet to initialise ...
	I0127 03:06:33.524637  951018 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:06:33.531422  951018 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-x5wqt" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:35.536992  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-x5wqt" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:37.539291  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-x5wqt" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:39.037349  951018 pod_ready.go:93] pod "coredns-668d6bf9bc-x5wqt" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:39.037378  951018 pod_ready.go:82] duration metric: took 5.505929077s for pod "coredns-668d6bf9bc-x5wqt" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:39.037388  951018 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:39.041511  951018 pod_ready.go:93] pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:39.041533  951018 pod_ready.go:82] duration metric: took 4.139829ms for pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:39.041542  951018 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.548052  951018 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:40.548077  951018 pod_ready.go:82] duration metric: took 1.506529055s for pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.548087  951018 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.552331  951018 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:40.552350  951018 pod_ready.go:82] duration metric: took 4.255809ms for pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.552359  951018 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sptdr" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.556662  951018 pod_ready.go:93] pod "kube-proxy-sptdr" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:40.556685  951018 pod_ready.go:82] duration metric: took 4.318416ms for pod "kube-proxy-sptdr" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:40.556696  951018 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:42.563877  951018 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:45.064100  951018 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:06:45.064127  951018 pod_ready.go:82] duration metric: took 4.5074236s for pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
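
The pod_ready.go lines in this section repeatedly check whether each system-critical pod's Ready condition is True. The sketch below shows the equivalent one-shot check with client-go; it is my illustration, not minikube code, and the kubeconfig path and pod name are placeholders loosely taken from the log.

// pod_ready_sketch.go - one-shot "is this pod Ready?" check via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test harness uses its own kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-f79f97bbb-6k8x4", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
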
	I0127 03:06:45.064140  951018 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace to be "Ready" ...
	I0127 03:06:47.070908  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:49.071782  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:51.571545  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:53.571867  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:56.070732  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:06:58.071148  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:00.570225  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:03.071413  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:05.071587  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:07.071843  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:09.570917  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:11.571617  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:14.070758  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:16.072041  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:18.570561  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:20.571269  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:23.070853  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:25.072094  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:27.570636  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:29.570692  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:32.070327  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:34.071308  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:36.570533  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:38.571307  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:41.071323  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:43.570181  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:45.572509  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:48.070779  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:50.071018  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:52.572438  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:55.076733  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:57.570108  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:07:59.570300  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:02.072663  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:04.570416  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:06.570994  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:08.571244  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:11.070289  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:13.570517  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:15.570755  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:18.069881  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:20.071334  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:22.071572  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:24.071671  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:26.071830  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:28.071984  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:30.571001  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:32.571514  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:34.571784  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:37.070952  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:39.570176  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:41.572121  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:44.069584  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:46.070357  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:48.571213  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:51.071405  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:53.572077  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:56.071558  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:08:58.571048  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:01.070608  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:03.571231  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:06.069914  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:08.070325  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:10.072392  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:12.569980  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:14.570897  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:17.070444  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:19.572321  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:22.070849  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:24.072188  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:26.571562  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:29.072374  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:31.571744  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:34.070639  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:36.071032  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:38.072006  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:40.571288  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:43.070490  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:45.072013  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:47.570203  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:49.570474  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:52.070349  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:54.570522  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:56.570839  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:09:59.069922  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:01.071094  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:03.072596  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:05.571492  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:08.071234  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:10.570345  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:12.571733  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:15.070739  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:17.071745  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:19.570073  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:22.070656  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:24.569910  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:26.571116  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:29.070331  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:31.071142  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:33.570353  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:35.571613  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:38.071334  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:40.571079  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:43.071355  951018 pod_ready.go:103] pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace has status "Ready":"False"
	I0127 03:10:45.064945  951018 pod_ready.go:82] duration metric: took 4m0.000745986s for pod "metrics-server-f79f97bbb-6k8x4" in "kube-system" namespace to be "Ready" ...
	E0127 03:10:45.065000  951018 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 03:10:45.065028  951018 pod_ready.go:39] duration metric: took 4m11.540380627s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
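The metrics-server pod never reported Ready within the 4m0s WaitExtra budget, which is what triggers the cluster reset below. A rough manual equivalent of this readiness poll (a sketch only; it assumes the kubeconfig context carries the profile name, as minikube normally configures it) would be:

    kubectl --context default-k8s-diff-port-150897 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-f79f97bbb-6k8x4 --timeout=4m0s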
	I0127 03:10:45.065069  951018 kubeadm.go:597] duration metric: took 4m21.449817194s to restartPrimaryControlPlane
	W0127 03:10:45.065156  951018 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 03:10:45.065191  951018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 03:11:12.886660  951018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.821434258s)
	I0127 03:11:12.886775  951018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:11:12.905846  951018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:11:12.921384  951018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:11:12.942732  951018 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:11:12.942757  951018 kubeadm.go:157] found existing configuration files:
	
	I0127 03:11:12.942825  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 03:11:12.961630  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:11:12.961707  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:11:12.982375  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 03:11:12.996297  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:11:12.996363  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:11:13.016724  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 03:11:13.026237  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:11:13.026322  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:11:13.035767  951018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 03:11:13.046127  951018 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:11:13.046206  951018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:11:13.056781  951018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:11:13.106542  951018 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:11:13.106709  951018 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:11:13.225004  951018 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:11:13.225177  951018 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:11:13.225346  951018 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:11:13.235871  951018 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:11:13.237828  951018 out.go:235]   - Generating certificates and keys ...
	I0127 03:11:13.237942  951018 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:11:13.239514  951018 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:11:13.239630  951018 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 03:11:13.239713  951018 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 03:11:13.239808  951018 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 03:11:13.239884  951018 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 03:11:13.239969  951018 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 03:11:13.240056  951018 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 03:11:13.240169  951018 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 03:11:13.240263  951018 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 03:11:13.240317  951018 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 03:11:13.240394  951018 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:11:13.346211  951018 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:11:13.463336  951018 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:11:13.511674  951018 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:11:13.621096  951018 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:11:13.750112  951018 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:11:13.750801  951018 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:11:13.753783  951018 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:11:13.755742  951018 out.go:235]   - Booting up control plane ...
	I0127 03:11:13.755861  951018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:11:13.755963  951018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:11:13.756052  951018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:11:13.776614  951018 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:11:13.787191  951018 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:11:13.787269  951018 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:11:13.923977  951018 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:11:13.924134  951018 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:11:14.440473  951018 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 516.21428ms
	I0127 03:11:14.440582  951018 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:11:19.442791  951018 kubeadm.go:310] [api-check] The API server is healthy after 5.002175418s
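The kubelet-check above polls http://127.0.0.1:10248/healthz and the api-check probes the API server's health endpoint on the profile's 8444 port. Both can be re-checked by hand from inside the guest (a sketch; it assumes curl is available in the minikube guest and that anonymous access to /healthz is enabled, which is the kube-apiserver default):

    minikube -p default-k8s-diff-port-150897 ssh -- curl -s http://127.0.0.1:10248/healthz
    minikube -p default-k8s-diff-port-150897 ssh -- curl -sk https://localhost:8444/healthz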
	I0127 03:11:19.458932  951018 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:11:19.479049  951018 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:11:19.518621  951018 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:11:19.518934  951018 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-150897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:11:19.532806  951018 kubeadm.go:310] [bootstrap-token] Using token: 91ep5i.nysgdpdxy8mfy0gw
	I0127 03:11:19.534037  951018 out.go:235]   - Configuring RBAC rules ...
	I0127 03:11:19.534202  951018 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:11:19.541911  951018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:11:19.555143  951018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:11:19.559127  951018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:11:19.565397  951018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:11:19.571113  951018 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:11:19.850027  951018 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:11:20.288157  951018 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:11:20.852476  951018 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:11:20.852510  951018 kubeadm.go:310] 
	I0127 03:11:20.852593  951018 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:11:20.852603  951018 kubeadm.go:310] 
	I0127 03:11:20.852723  951018 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:11:20.852733  951018 kubeadm.go:310] 
	I0127 03:11:20.852767  951018 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:11:20.852889  951018 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:11:20.852999  951018 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:11:20.853012  951018 kubeadm.go:310] 
	I0127 03:11:20.853094  951018 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:11:20.853105  951018 kubeadm.go:310] 
	I0127 03:11:20.853173  951018 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:11:20.853184  951018 kubeadm.go:310] 
	I0127 03:11:20.853252  951018 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:11:20.853374  951018 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:11:20.853497  951018 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:11:20.853528  951018 kubeadm.go:310] 
	I0127 03:11:20.853671  951018 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:11:20.853790  951018 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:11:20.853815  951018 kubeadm.go:310] 
	I0127 03:11:20.853934  951018 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 91ep5i.nysgdpdxy8mfy0gw \
	I0127 03:11:20.854084  951018 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:11:20.854116  951018 kubeadm.go:310] 	--control-plane 
	I0127 03:11:20.854148  951018 kubeadm.go:310] 
	I0127 03:11:20.854272  951018 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:11:20.854292  951018 kubeadm.go:310] 
	I0127 03:11:20.854413  951018 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 91ep5i.nysgdpdxy8mfy0gw \
	I0127 03:11:20.854569  951018 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:11:20.855218  951018 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:11:20.855274  951018 cni.go:84] Creating CNI manager for ""
	I0127 03:11:20.855287  951018 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 03:11:20.857634  951018 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:11:20.858680  951018 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:11:20.872030  951018 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
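The bridge CNI configuration is copied to /etc/cni/net.d/1-k8s.conflist inside the guest. To inspect what was actually deployed (a sketch using the profile name from this run):

    minikube -p default-k8s-diff-port-150897 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist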
	I0127 03:11:20.891578  951018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:11:20.891669  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:20.891681  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-150897 minikube.k8s.io/updated_at=2025_01_27T03_11_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-150897 minikube.k8s.io/primary=true
	I0127 03:11:20.909161  951018 ops.go:34] apiserver oom_adj: -16
	I0127 03:11:21.124267  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:21.625321  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:22.124767  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:22.625081  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:23.125215  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:23.625305  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:24.124811  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:24.625032  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:25.125105  951018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:11:25.231053  951018 kubeadm.go:1113] duration metric: took 4.339461051s to wait for elevateKubeSystemPrivileges
	I0127 03:11:25.231092  951018 kubeadm.go:394] duration metric: took 5m1.666927061s to StartCluster
	I0127 03:11:25.231116  951018 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:11:25.231213  951018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:11:25.232283  951018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:11:25.232529  951018 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.57 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:11:25.232627  951018 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:11:25.232737  951018 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-150897"
	I0127 03:11:25.232746  951018 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-150897"
	I0127 03:11:25.232771  951018 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-150897"
	I0127 03:11:25.232780  951018 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-150897"
	I0127 03:11:25.232779  951018 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:11:25.232791  951018 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-150897"
	W0127 03:11:25.232799  951018 addons.go:247] addon dashboard should already be in state true
	W0127 03:11:25.232785  951018 addons.go:247] addon storage-provisioner should already be in state true
	I0127 03:11:25.232819  951018 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-150897"
	I0127 03:11:25.232838  951018 host.go:66] Checking if "default-k8s-diff-port-150897" exists ...
	I0127 03:11:25.232846  951018 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-150897"
	W0127 03:11:25.232856  951018 addons.go:247] addon metrics-server should already be in state true
	I0127 03:11:25.232882  951018 host.go:66] Checking if "default-k8s-diff-port-150897" exists ...
	I0127 03:11:25.232895  951018 host.go:66] Checking if "default-k8s-diff-port-150897" exists ...
	I0127 03:11:25.232771  951018 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-150897"
	I0127 03:11:25.233348  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.233358  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.233371  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.233380  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.233397  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.233413  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.233424  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.233496  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.234516  951018 out.go:177] * Verifying Kubernetes components...
	I0127 03:11:25.235797  951018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:11:25.249912  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0127 03:11:25.250059  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40111
	I0127 03:11:25.250586  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.250737  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.251238  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.251258  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.251401  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.251418  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.251618  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.251791  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.252253  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.252299  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.252381  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.252429  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.252529  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I0127 03:11:25.252721  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 03:11:25.252900  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.253160  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.253396  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.253419  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.253752  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.253770  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.253773  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.254167  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.254340  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.254379  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.254538  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:11:25.257992  951018 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-150897"
	W0127 03:11:25.258019  951018 addons.go:247] addon default-storageclass should already be in state true
	I0127 03:11:25.258048  951018 host.go:66] Checking if "default-k8s-diff-port-150897" exists ...
	I0127 03:11:25.258437  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.258481  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.276823  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0127 03:11:25.276845  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0127 03:11:25.276823  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0127 03:11:25.276823  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0127 03:11:25.277564  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.277590  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.277569  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.278136  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.278157  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.278294  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.278313  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.278497  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.278635  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:11:25.278682  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.278900  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:11:25.279859  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.280207  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.280225  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.280589  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.280733  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.280744  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.281140  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.281233  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:11:25.281439  951018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:11:25.281480  951018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:11:25.281489  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:11:25.281621  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:11:25.283123  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:11:25.283178  951018 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:11:25.283219  951018 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 03:11:25.284624  951018 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 03:11:25.284734  951018 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:11:25.284762  951018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:11:25.284789  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:11:25.285754  951018 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 03:11:25.285883  951018 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 03:11:25.285900  951018 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 03:11:25.285931  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:11:25.286827  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 03:11:25.286854  951018 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 03:11:25.286875  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:11:25.290037  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.290051  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.290085  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:11:25.290107  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.290619  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:11:25.290644  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:11:25.290650  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.290840  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:11:25.290843  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:11:25.291031  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:11:25.291074  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:11:25.291138  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.291207  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:11:25.291302  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:11:25.291462  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:11:25.291637  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:11:25.291660  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.291868  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:11:25.292066  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:11:25.292223  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:11:25.292355  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:11:25.302667  951018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0127 03:11:25.303234  951018 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:11:25.303858  951018 main.go:141] libmachine: Using API Version  1
	I0127 03:11:25.303884  951018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:11:25.304223  951018 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:11:25.304425  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetState
	I0127 03:11:25.306078  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .DriverName
	I0127 03:11:25.306326  951018 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:11:25.306341  951018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:11:25.306356  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHHostname
	I0127 03:11:25.309062  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.309367  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:f2:51", ip: ""} in network mk-default-k8s-diff-port-150897: {Iface:virbr2 ExpiryTime:2025-01-27 04:06:08 +0000 UTC Type:0 Mac:52:54:00:06:f2:51 Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:default-k8s-diff-port-150897 Clientid:01:52:54:00:06:f2:51}
	I0127 03:11:25.309397  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | domain default-k8s-diff-port-150897 has defined IP address 192.168.50.57 and MAC address 52:54:00:06:f2:51 in network mk-default-k8s-diff-port-150897
	I0127 03:11:25.309570  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHPort
	I0127 03:11:25.309740  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHKeyPath
	I0127 03:11:25.309907  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .GetSSHUsername
	I0127 03:11:25.310051  951018 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/default-k8s-diff-port-150897/id_rsa Username:docker}
	I0127 03:11:25.531873  951018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:11:25.556303  951018 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-150897" to be "Ready" ...
	I0127 03:11:25.594099  951018 node_ready.go:49] node "default-k8s-diff-port-150897" has status "Ready":"True"
	I0127 03:11:25.594128  951018 node_ready.go:38] duration metric: took 37.780999ms for node "default-k8s-diff-port-150897" to be "Ready" ...
	I0127 03:11:25.594142  951018 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:11:25.605677  951018 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:25.640621  951018 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 03:11:25.640646  951018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 03:11:25.661625  951018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:11:25.697647  951018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:11:25.697970  951018 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 03:11:25.697996  951018 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 03:11:25.791675  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 03:11:25.791711  951018 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 03:11:25.794282  951018 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:11:25.794305  951018 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 03:11:25.886368  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 03:11:25.886393  951018 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 03:11:25.905089  951018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 03:11:25.962228  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 03:11:25.962260  951018 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 03:11:26.042508  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 03:11:26.042543  951018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 03:11:26.175110  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 03:11:26.175147  951018 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 03:11:26.285917  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 03:11:26.285944  951018 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 03:11:26.337054  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 03:11:26.337094  951018 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 03:11:26.398947  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 03:11:26.398972  951018 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 03:11:26.463210  951018 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 03:11:26.463252  951018 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 03:11:26.517633  951018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
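Once the dashboard manifests are applied, the resulting workloads can be checked directly (a sketch; the kubernetes-dashboard namespace is an assumption based on the dashboard-ns.yaml manifest applied above):

    kubectl --context default-k8s-diff-port-150897 -n kubernetes-dashboard get pods,svc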
	I0127 03:11:26.602310  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:26.602342  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:26.602411  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:26.602440  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:26.602715  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:26.602766  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:26.602776  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Closing plugin on server side
	I0127 03:11:26.602797  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:26.602868  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:26.602740  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Closing plugin on server side
	I0127 03:11:26.602834  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:26.603142  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:26.603155  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:26.603164  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:26.603237  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Closing plugin on server side
	I0127 03:11:26.603278  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:26.603293  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:26.603385  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:26.603406  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:26.615184  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:26.615211  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:26.615487  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:26.615531  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:26.615546  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Closing plugin on server side
	I0127 03:11:27.188028  951018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28289676s)
	I0127 03:11:27.188104  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:27.188121  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:27.188507  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:27.188530  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:27.188556  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) DBG | Closing plugin on server side
	I0127 03:11:27.188600  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:27.188618  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:27.188896  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:27.188934  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:27.188948  951018 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-150897"
	I0127 03:11:27.619951  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace has status "Ready":"False"
	I0127 03:11:28.158322  951018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.640633977s)
	I0127 03:11:28.158375  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:28.158391  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:28.158783  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:28.158801  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:28.158812  951018 main.go:141] libmachine: Making call to close driver server
	I0127 03:11:28.158819  951018 main.go:141] libmachine: (default-k8s-diff-port-150897) Calling .Close
	I0127 03:11:28.159141  951018 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:11:28.159163  951018 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:11:28.160741  951018 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-150897 addons enable metrics-server
	
	I0127 03:11:28.162031  951018 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 03:11:28.163295  951018 addons.go:514] duration metric: took 2.930673623s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 03:11:30.111938  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace has status "Ready":"False"
	I0127 03:11:32.112194  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace has status "Ready":"False"
	I0127 03:11:34.113243  951018 pod_ready.go:103] pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace has status "Ready":"False"
	I0127 03:11:35.122230  951018 pod_ready.go:93] pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.122259  951018 pod_ready.go:82] duration metric: took 9.516548734s for pod "coredns-668d6bf9bc-bhmkn" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.122272  951018 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w79lb" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.630636  951018 pod_ready.go:93] pod "coredns-668d6bf9bc-w79lb" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.630662  951018 pod_ready.go:82] duration metric: took 508.382596ms for pod "coredns-668d6bf9bc-w79lb" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.630672  951018 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.637765  951018 pod_ready.go:93] pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.637792  951018 pod_ready.go:82] duration metric: took 7.112165ms for pod "etcd-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.637808  951018 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.643737  951018 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.643765  951018 pod_ready.go:82] duration metric: took 5.947406ms for pod "kube-apiserver-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.643779  951018 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.648811  951018 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.648838  951018 pod_ready.go:82] duration metric: took 5.047265ms for pod "kube-controller-manager-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.648853  951018 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-br56d" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.909278  951018 pod_ready.go:93] pod "kube-proxy-br56d" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:35.909313  951018 pod_ready.go:82] duration metric: took 260.44769ms for pod "kube-proxy-br56d" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:35.909328  951018 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:36.310599  951018 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace has status "Ready":"True"
	I0127 03:11:36.310624  951018 pod_ready.go:82] duration metric: took 401.28831ms for pod "kube-scheduler-default-k8s-diff-port-150897" in "kube-system" namespace to be "Ready" ...
	I0127 03:11:36.310633  951018 pod_ready.go:39] duration metric: took 10.71647752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
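
The pod_ready.go lines above poll each system pod until its Ready condition turns True, giving each one up to 6m0s. A stdlib-only sketch of that poll-until-ready loop, with the check function standing in for the real Ready-condition lookup:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitReady mirrors the pod_ready.go pattern above: re-run a readiness check
	// on a fixed interval until it succeeds or the timeout expires.
	func waitReady(parent context.Context, timeout, interval time.Duration, check func() (bool, error)) error {
		ctx, cancel := context.WithTimeout(parent, timeout)
		defer cancel()
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			select {
			case <-ctx.Done():
				return errors.New("timed out waiting for condition")
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Dummy check that becomes true after ~4s, standing in for
		// "pod has condition Ready=True".
		start := time.Now()
		err := waitReady(context.Background(), 6*time.Minute, 2*time.Second, func() (bool, error) {
			return time.Since(start) > 4*time.Second, nil
		})
		fmt.Println("waitReady:", err)
	}
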
	I0127 03:11:36.310651  951018 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:11:36.310703  951018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:11:36.327305  951018 api_server.go:72] duration metric: took 11.094737747s to wait for apiserver process to appear ...
	I0127 03:11:36.327340  951018 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:11:36.327368  951018 api_server.go:253] Checking apiserver healthz at https://192.168.50.57:8444/healthz ...
	I0127 03:11:36.332865  951018 api_server.go:279] https://192.168.50.57:8444/healthz returned 200:
	ok
	I0127 03:11:36.333956  951018 api_server.go:141] control plane version: v1.32.1
	I0127 03:11:36.333979  951018 api_server.go:131] duration metric: took 6.631156ms to wait for apiserver health ...
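
The health check above is an HTTPS GET against the apiserver's /healthz endpoint that treats a 200 "ok" response as healthy. A sketch of the same probe (certificate verification is skipped here purely for illustration; the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the same kind of probe as api_server.go above:
	// GET https://<ip>:<port>/healthz and expect a 200 with body "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		return nil
	}

	func main() {
		_ = checkHealthz("https://192.168.50.57:8444/healthz")
	}
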
	I0127 03:11:36.333988  951018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:11:36.512831  951018 system_pods.go:59] 9 kube-system pods found
	I0127 03:11:36.512860  951018 system_pods.go:61] "coredns-668d6bf9bc-bhmkn" [e3df7494-99db-492d-852f-b3019a4a5f59] Running
	I0127 03:11:36.512867  951018 system_pods.go:61] "coredns-668d6bf9bc-w79lb" [84895192-c7b3-4307-92ce-e5a874d1c151] Running
	I0127 03:11:36.512872  951018 system_pods.go:61] "etcd-default-k8s-diff-port-150897" [7c02ce3d-50a4-4104-9fd7-0c1b9e0ff227] Running
	I0127 03:11:36.512878  951018 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-150897" [845be567-6bda-4e34-9789-163bdb053488] Running
	I0127 03:11:36.512883  951018 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-150897" [317ec9f5-ad08-4ee6-9990-4193caaf124d] Running
	I0127 03:11:36.512888  951018 system_pods.go:61] "kube-proxy-br56d" [1de9065a-ccfd-497e-859b-f6a6de73e192] Running
	I0127 03:11:36.512893  951018 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-150897" [0150171a-125a-47ab-9680-d271b669ce4e] Running
	I0127 03:11:36.512902  951018 system_pods.go:61] "metrics-server-f79f97bbb-t88bf" [a5199359-da1d-44dc-acbc-54f288f148ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 03:11:36.512911  951018 system_pods.go:61] "storage-provisioner" [c1d711a0-b203-4f87-9b3e-42cc4ec4c4e9] Running
	I0127 03:11:36.512933  951018 system_pods.go:74] duration metric: took 178.926334ms to wait for pod list to return data ...
	I0127 03:11:36.512945  951018 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:11:36.710013  951018 default_sa.go:45] found service account: "default"
	I0127 03:11:36.710048  951018 default_sa.go:55] duration metric: took 197.094014ms for default service account to be created ...
	I0127 03:11:36.710061  951018 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:11:36.912870  951018 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-150897 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-150897 -n default-k8s-diff-port-150897
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-150897 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-150897 logs -n 25: (1.302091672s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo docker                         | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo find                           | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo crio                           | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-284111                                     | bridge-284111          | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	| delete  | -p old-k8s-version-542356                            | old-k8s-version-542356 | jenkins | v1.35.0 | 27 Jan 25 03:23 UTC | 27 Jan 25 03:23 UTC |
	| delete  | -p no-preload-844432                                 | no-preload-844432      | jenkins | v1.35.0 | 27 Jan 25 03:23 UTC | 27 Jan 25 03:23 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:17:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
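
Every entry below follows the line format declared in this header. A small, illustrative parser for that format (not part of minikube) can pull out the level, timestamp, source location, and message:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the header's format:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I0127 03:17:58.007832  965412 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
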
	I0127 03:17:58.007832  965412 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:17:58.008087  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008098  965412 out.go:358] Setting ErrFile to fd 2...
	I0127 03:17:58.008102  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008278  965412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:17:58.008983  965412 out.go:352] Setting JSON to false
	I0127 03:17:58.010228  965412 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14421,"bootTime":1737933457,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:17:58.010344  965412 start.go:139] virtualization: kvm guest
	I0127 03:17:58.012718  965412 out.go:177] * [bridge-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:17:58.014083  965412 notify.go:220] Checking for updates...
	I0127 03:17:58.014104  965412 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:17:58.015451  965412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:17:58.016768  965412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:17:58.017965  965412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.019014  965412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:17:58.020110  965412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:17:58.021921  965412 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022085  965412 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022217  965412 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:17:58.022360  965412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:17:58.061018  965412 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 03:17:58.062340  965412 start.go:297] selected driver: kvm2
	I0127 03:17:58.062361  965412 start.go:901] validating driver "kvm2" against <nil>
	I0127 03:17:58.062373  965412 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:17:58.063151  965412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.063269  965412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:17:58.080150  965412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:17:58.080207  965412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 03:17:58.080475  965412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:17:58.080515  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:17:58.080523  965412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 03:17:58.080596  965412 start.go:340] cluster config:
	{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
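
The cluster config dumped above is later saved as JSON under the profile directory (see the config.json path a few lines below). A trimmed-down sketch of writing such a profile file, using a hypothetical subset of the fields shown:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
	)

	// ClusterConfig is a small, illustrative subset of the fields in the dump
	// above; minikube's real struct has many more.
	type ClusterConfig struct {
		Name              string `json:"Name"`
		Driver            string `json:"Driver"`
		Memory            int    `json:"Memory"`
		CPUs              int    `json:"CPUs"`
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
		CNI               string `json:"CNI"`
	}

	// saveProfile writes <base>/profiles/<name>/config.json, mirroring the
	// "Saving config to ..." step below.
	func saveProfile(base string, cfg ClusterConfig) error {
		dir := filepath.Join(base, "profiles", cfg.Name)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
	}

	func main() {
		cfg := ClusterConfig{
			Name: "bridge-284111", Driver: "kvm2", Memory: 3072, CPUs: 2,
			KubernetesVersion: "v1.32.1", ContainerRuntime: "crio", CNI: "bridge",
		}
		if err := saveProfile(os.TempDir(), cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
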
	I0127 03:17:58.080703  965412 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.082659  965412 out.go:177] * Starting "bridge-284111" primary control-plane node in "bridge-284111" cluster
	I0127 03:17:58.084060  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:17:58.084155  965412 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:17:58.084193  965412 cache.go:56] Caching tarball of preloaded images
	I0127 03:17:58.084317  965412 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:17:58.084333  965412 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:17:58.084446  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:17:58.084473  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json: {Name:mk925500efef5bfd6040ea4d63f14dacaa6ac946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:58.084633  965412 start.go:360] acquireMachinesLock for bridge-284111: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:17:58.084676  965412 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "bridge-284111"
	I0127 03:17:58.084703  965412 start.go:93] Provisioning new machine with config: &{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:17:58.084799  965412 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 03:17:58.086526  965412 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 03:17:58.086710  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:17:58.086766  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:17:58.103582  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0127 03:17:58.104096  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:17:58.104674  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:17:58.104697  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:17:58.105051  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:17:58.105275  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:17:58.105440  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:17:58.105583  965412 start.go:159] libmachine.API.Create for "bridge-284111" (driver="kvm2")
	I0127 03:17:58.105618  965412 client.go:168] LocalClient.Create starting
	I0127 03:17:58.105657  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 03:17:58.105689  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105706  965412 main.go:141] libmachine: Parsing certificate...
	I0127 03:17:58.105761  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 03:17:58.105784  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105804  965412 main.go:141] libmachine: Parsing certificate...
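
The client setup above reads ca.pem and cert.pem, decodes the PEM blocks, and parses the certificates. A stdlib sketch of that decode-and-parse step:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// parseCertPEM mirrors the "Reading certificate data / Decoding PEM data /
	// Parsing certificate" steps above for a single PEM file.
	func parseCertPEM(path string) (*x509.Certificate, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, err
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return nil, fmt.Errorf("%s: no CERTIFICATE block found", path)
		}
		return x509.ParseCertificate(block.Bytes)
	}

	func main() {
		// Illustrative path; the log uses .minikube/certs/ca.pem.
		cert, err := parseCertPEM("ca.pem")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
	}
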
	I0127 03:17:58.105828  965412 main.go:141] libmachine: Running pre-create checks...
	I0127 03:17:58.105836  965412 main.go:141] libmachine: (bridge-284111) Calling .PreCreateCheck
	I0127 03:17:58.106286  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:17:58.106758  965412 main.go:141] libmachine: Creating machine...
	I0127 03:17:58.106773  965412 main.go:141] libmachine: (bridge-284111) Calling .Create
	I0127 03:17:58.106921  965412 main.go:141] libmachine: (bridge-284111) creating KVM machine...
	I0127 03:17:58.106938  965412 main.go:141] libmachine: (bridge-284111) creating network...
	I0127 03:17:58.108340  965412 main.go:141] libmachine: (bridge-284111) DBG | found existing default KVM network
	I0127 03:17:58.109981  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.109804  965435 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:80:59} reservation:<nil>}
	I0127 03:17:58.111324  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.111241  965435 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:c5:54} reservation:<nil>}
	I0127 03:17:58.112864  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.112772  965435 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000386960}
	I0127 03:17:58.112921  965412 main.go:141] libmachine: (bridge-284111) DBG | created network xml: 
	I0127 03:17:58.112965  965412 main.go:141] libmachine: (bridge-284111) DBG | <network>
	I0127 03:17:58.112982  965412 main.go:141] libmachine: (bridge-284111) DBG |   <name>mk-bridge-284111</name>
	I0127 03:17:58.112994  965412 main.go:141] libmachine: (bridge-284111) DBG |   <dns enable='no'/>
	I0127 03:17:58.113003  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113012  965412 main.go:141] libmachine: (bridge-284111) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 03:17:58.113026  965412 main.go:141] libmachine: (bridge-284111) DBG |     <dhcp>
	I0127 03:17:58.113039  965412 main.go:141] libmachine: (bridge-284111) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 03:17:58.113049  965412 main.go:141] libmachine: (bridge-284111) DBG |     </dhcp>
	I0127 03:17:58.113065  965412 main.go:141] libmachine: (bridge-284111) DBG |   </ip>
	I0127 03:17:58.113087  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113098  965412 main.go:141] libmachine: (bridge-284111) DBG | </network>
	I0127 03:17:58.113108  965412 main.go:141] libmachine: (bridge-284111) DBG | 
	I0127 03:17:58.118866  965412 main.go:141] libmachine: (bridge-284111) DBG | trying to create private KVM network mk-bridge-284111 192.168.61.0/24...
	I0127 03:17:58.193944  965412 main.go:141] libmachine: (bridge-284111) DBG | private KVM network mk-bridge-284111 192.168.61.0/24 created
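
The network XML printed above is handed to libvirt, which defines and starts the isolated mk-bridge-284111 network with its own DHCP range. A sketch of doing the same by shelling out to virsh (assumes libvirtd is running and the caller may use qemu:///system; not the driver's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// defineNetwork writes the network XML to a temp file, then defines and
	// starts the network with virsh, mirroring what the log shows libvirt doing.
	func defineNetwork(name, xml string) error {
		f, err := os.CreateTemp("", name+"-*.xml")
		if err != nil {
			return err
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(xml); err != nil {
			return err
		}
		f.Close()
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", name},
		} {
			if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("virsh %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		xml := `<network>
	  <name>mk-bridge-284111</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp><range start='192.168.61.2' end='192.168.61.253'/></dhcp>
	  </ip>
	</network>`
		if err := defineNetwork("mk-bridge-284111", xml); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
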
	I0127 03:17:58.194004  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.193927  965435 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.194017  965412 main.go:141] libmachine: (bridge-284111) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.194041  965412 main.go:141] libmachine: (bridge-284111) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 03:17:58.194060  965412 main.go:141] libmachine: (bridge-284111) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
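
Because the source here is a file:// URL, the "download" is effectively a local copy of the cached ISO into the new machine directory. A stdlib sketch of that copy step (paths are illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyISO places the cached ISO into the machine directory, mirroring the
	// boot2docker.iso step above.
	func copyISO(src, dst string) error {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		if _, err := io.Copy(out, in); err != nil {
			return err
		}
		return out.Sync()
	}

	func main() {
		if err := copyISO("minikube-v1.35.0-amd64.iso", "boot2docker.iso"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
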
	I0127 03:17:58.491014  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.490850  965435 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa...
	I0127 03:17:58.742092  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.741934  965435 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk...
	I0127 03:17:58.742129  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing magic tar header
	I0127 03:17:58.742144  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing SSH key tar header
	I0127 03:17:58.742157  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.742067  965435 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.742170  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111
	I0127 03:17:58.742179  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 (perms=drwx------)
	I0127 03:17:58.742193  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 03:17:58.742211  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.742226  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 03:17:58.742240  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 03:17:58.742254  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 03:17:58.742267  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 03:17:58.742281  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins
	I0127 03:17:58.742293  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home
	I0127 03:17:58.742307  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 03:17:58.742319  965412 main.go:141] libmachine: (bridge-284111) DBG | skipping /home - not owner
	I0127 03:17:58.742332  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 03:17:58.742346  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 03:17:58.742355  965412 main.go:141] libmachine: (bridge-284111) creating domain...
	I0127 03:17:58.743737  965412 main.go:141] libmachine: (bridge-284111) define libvirt domain using xml: 
	I0127 03:17:58.743768  965412 main.go:141] libmachine: (bridge-284111) <domain type='kvm'>
	I0127 03:17:58.743795  965412 main.go:141] libmachine: (bridge-284111)   <name>bridge-284111</name>
	I0127 03:17:58.743805  965412 main.go:141] libmachine: (bridge-284111)   <memory unit='MiB'>3072</memory>
	I0127 03:17:58.743811  965412 main.go:141] libmachine: (bridge-284111)   <vcpu>2</vcpu>
	I0127 03:17:58.743818  965412 main.go:141] libmachine: (bridge-284111)   <features>
	I0127 03:17:58.743824  965412 main.go:141] libmachine: (bridge-284111)     <acpi/>
	I0127 03:17:58.743831  965412 main.go:141] libmachine: (bridge-284111)     <apic/>
	I0127 03:17:58.743836  965412 main.go:141] libmachine: (bridge-284111)     <pae/>
	I0127 03:17:58.743843  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.743860  965412 main.go:141] libmachine: (bridge-284111)   </features>
	I0127 03:17:58.743868  965412 main.go:141] libmachine: (bridge-284111)   <cpu mode='host-passthrough'>
	I0127 03:17:58.743872  965412 main.go:141] libmachine: (bridge-284111)   
	I0127 03:17:58.743877  965412 main.go:141] libmachine: (bridge-284111)   </cpu>
	I0127 03:17:58.743916  965412 main.go:141] libmachine: (bridge-284111)   <os>
	I0127 03:17:58.743943  965412 main.go:141] libmachine: (bridge-284111)     <type>hvm</type>
	I0127 03:17:58.743960  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='cdrom'/>
	I0127 03:17:58.743978  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='hd'/>
	I0127 03:17:58.743991  965412 main.go:141] libmachine: (bridge-284111)     <bootmenu enable='no'/>
	I0127 03:17:58.744000  965412 main.go:141] libmachine: (bridge-284111)   </os>
	I0127 03:17:58.744011  965412 main.go:141] libmachine: (bridge-284111)   <devices>
	I0127 03:17:58.744022  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='cdrom'>
	I0127 03:17:58.744037  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/boot2docker.iso'/>
	I0127 03:17:58.744049  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hdc' bus='scsi'/>
	I0127 03:17:58.744056  965412 main.go:141] libmachine: (bridge-284111)       <readonly/>
	I0127 03:17:58.744068  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744079  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='disk'>
	I0127 03:17:58.744092  965412 main.go:141] libmachine: (bridge-284111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 03:17:58.744106  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk'/>
	I0127 03:17:58.744119  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hda' bus='virtio'/>
	I0127 03:17:58.744129  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744147  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744166  965412 main.go:141] libmachine: (bridge-284111)       <source network='mk-bridge-284111'/>
	I0127 03:17:58.744177  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744181  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744188  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744199  965412 main.go:141] libmachine: (bridge-284111)       <source network='default'/>
	I0127 03:17:58.744209  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744220  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744237  965412 main.go:141] libmachine: (bridge-284111)     <serial type='pty'>
	I0127 03:17:58.744254  965412 main.go:141] libmachine: (bridge-284111)       <target port='0'/>
	I0127 03:17:58.744267  965412 main.go:141] libmachine: (bridge-284111)     </serial>
	I0127 03:17:58.744277  965412 main.go:141] libmachine: (bridge-284111)     <console type='pty'>
	I0127 03:17:58.744286  965412 main.go:141] libmachine: (bridge-284111)       <target type='serial' port='0'/>
	I0127 03:17:58.744295  965412 main.go:141] libmachine: (bridge-284111)     </console>
	I0127 03:17:58.744304  965412 main.go:141] libmachine: (bridge-284111)     <rng model='virtio'>
	I0127 03:17:58.744320  965412 main.go:141] libmachine: (bridge-284111)       <backend model='random'>/dev/random</backend>
	I0127 03:17:58.744330  965412 main.go:141] libmachine: (bridge-284111)     </rng>
	I0127 03:17:58.744339  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744352  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744383  965412 main.go:141] libmachine: (bridge-284111)   </devices>
	I0127 03:17:58.744399  965412 main.go:141] libmachine: (bridge-284111) </domain>
	I0127 03:17:58.744433  965412 main.go:141] libmachine: (bridge-284111) 
	I0127 03:17:58.748565  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b5:a5:4c in network default
	I0127 03:17:58.749275  965412 main.go:141] libmachine: (bridge-284111) starting domain...
	I0127 03:17:58.749295  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:17:58.749303  965412 main.go:141] libmachine: (bridge-284111) ensuring networks are active...
	I0127 03:17:58.750055  965412 main.go:141] libmachine: (bridge-284111) Ensuring network default is active
	I0127 03:17:58.750412  965412 main.go:141] libmachine: (bridge-284111) Ensuring network mk-bridge-284111 is active
	I0127 03:17:58.750915  965412 main.go:141] libmachine: (bridge-284111) getting domain XML...
	I0127 03:17:58.751662  965412 main.go:141] libmachine: (bridge-284111) creating domain...
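
At this point the domain XML shown above is defined in libvirt and the domain is started. A sketch using the libvirt.org/go/libvirt binding (an assumption about tooling; this is not minikube's driver code, and error handling is abbreviated):

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	// defineAndStart sketches the "define libvirt domain using xml" and
	// "starting domain" steps above.
	func defineAndStart(xml string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()
		dom, err := conn.DomainDefineXML(xml)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create() // boots the VM
	}

	func main() {
		xml, err := os.ReadFile("bridge-284111.xml") // a domain XML like the one above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if err := defineAndStart(string(xml)); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
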
	I0127 03:18:00.015025  965412 main.go:141] libmachine: (bridge-284111) waiting for IP...
	I0127 03:18:00.016519  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.017082  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.017146  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.017069  965435 retry.go:31] will retry after 296.46937ms: waiting for domain to come up
	I0127 03:18:00.315605  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.316275  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.316335  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.316255  965435 retry.go:31] will retry after 324.587633ms: waiting for domain to come up
	I0127 03:18:00.642896  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.643504  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.643533  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.643463  965435 retry.go:31] will retry after 310.207491ms: waiting for domain to come up
	I0127 03:18:00.955258  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.955855  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.955900  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.955817  965435 retry.go:31] will retry after 446.485588ms: waiting for domain to come up
	I0127 03:18:01.403690  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.404190  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.404213  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.404170  965435 retry.go:31] will retry after 582.778524ms: waiting for domain to come up
	I0127 03:18:01.988986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.989525  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.989575  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.989493  965435 retry.go:31] will retry after 794.193078ms: waiting for domain to come up
	I0127 03:18:02.784888  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:02.785367  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:02.785398  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:02.785331  965435 retry.go:31] will retry after 750.185481ms: waiting for domain to come up
	I0127 03:18:03.536841  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:03.537466  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:03.537489  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:03.537438  965435 retry.go:31] will retry after 1.167158008s: waiting for domain to come up
	I0127 03:18:04.706731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:04.707283  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:04.707309  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:04.707258  965435 retry.go:31] will retry after 1.775191002s: waiting for domain to come up
	I0127 03:18:06.485130  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:06.485646  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:06.485667  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:06.485615  965435 retry.go:31] will retry after 1.448139158s: waiting for domain to come up
	I0127 03:18:07.935272  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:07.935916  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:07.935951  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:07.935874  965435 retry.go:31] will retry after 1.937800559s: waiting for domain to come up
	I0127 03:18:09.876527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:09.877179  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:09.877209  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:09.877127  965435 retry.go:31] will retry after 3.510411188s: waiting for domain to come up
	I0127 03:18:13.388796  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:13.389263  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:13.389312  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:13.389227  965435 retry.go:31] will retry after 2.812768495s: waiting for domain to come up
	I0127 03:18:16.203115  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:16.203663  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:16.203687  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:16.203637  965435 retry.go:31] will retry after 5.220368337s: waiting for domain to come up
	I0127 03:18:21.428631  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429297  965412 main.go:141] libmachine: (bridge-284111) found domain IP: 192.168.61.178
	I0127 03:18:21.429319  965412 main.go:141] libmachine: (bridge-284111) reserving static IP address...
	I0127 03:18:21.429334  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has current primary IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429752  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find host DHCP lease matching {name: "bridge-284111", mac: "52:54:00:b1:5c:91", ip: "192.168.61.178"} in network mk-bridge-284111
	I0127 03:18:21.509966  965412 main.go:141] libmachine: (bridge-284111) reserved static IP address 192.168.61.178 for domain bridge-284111
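
The wait-for-IP phase above keeps re-checking the network's DHCP leases for the domain's MAC address, sleeping a little longer after each miss (the "will retry after ..." lines), before pinning the discovered address as a static lease. A small retry-with-backoff sketch in that spirit:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff re-runs op, waiting a growing, jittered delay after each
	// failure, up to a maximum number of attempts.
	func retryBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			delay := time.Duration(1<<i)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		start := time.Now()
		err := retryBackoff(8, 300*time.Millisecond, func() error {
			// Stand-in for "unable to find current IP address of domain".
			if time.Since(start) < 2*time.Second {
				return fmt.Errorf("no DHCP lease for domain yet")
			}
			return nil
		})
		fmt.Println("done:", err)
	}

The same lease table can be inspected by hand with `virsh net-dhcp-leases mk-bridge-284111`.
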
	I0127 03:18:21.509994  965412 main.go:141] libmachine: (bridge-284111) waiting for SSH...
	I0127 03:18:21.510014  965412 main.go:141] libmachine: (bridge-284111) DBG | Getting to WaitForSSH function...
	I0127 03:18:21.512978  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513493  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.513526  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513707  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH client type: external
	I0127 03:18:21.513738  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa (-rw-------)
	I0127 03:18:21.513787  965412 main.go:141] libmachine: (bridge-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:18:21.513808  965412 main.go:141] libmachine: (bridge-284111) DBG | About to run SSH command:
	I0127 03:18:21.513827  965412 main.go:141] libmachine: (bridge-284111) DBG | exit 0
	I0127 03:18:21.644785  965412 main.go:141] libmachine: (bridge-284111) DBG | SSH cmd err, output: <nil>: 
	I0127 03:18:21.645052  965412 main.go:141] libmachine: (bridge-284111) KVM machine creation complete
	I0127 03:18:21.645355  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:21.645965  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646190  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646360  965412 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 03:18:21.646375  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:21.647746  965412 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 03:18:21.647759  965412 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 03:18:21.647764  965412 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 03:18:21.647770  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.650013  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650350  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.650389  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650556  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.650778  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.650971  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.651160  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.651399  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.651690  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.651705  965412 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 03:18:21.764222  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:21.764246  965412 main.go:141] libmachine: Detecting the provisioner...
	I0127 03:18:21.764254  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.767309  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767688  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.767729  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767918  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.768152  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768332  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768482  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.768638  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.768838  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.768853  965412 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 03:18:21.881643  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 03:18:21.881735  965412 main.go:141] libmachine: found compatible host: buildroot
	I0127 03:18:21.881746  965412 main.go:141] libmachine: Provisioning with buildroot...
	I0127 03:18:21.881753  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.881975  965412 buildroot.go:166] provisioning hostname "bridge-284111"
	I0127 03:18:21.881988  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.882114  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.885113  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885480  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.885512  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885630  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.885871  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886021  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886238  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.886376  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.886540  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.886551  965412 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-284111 && echo "bridge-284111" | sudo tee /etc/hostname
	I0127 03:18:22.015776  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-284111
	
	I0127 03:18:22.015808  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.018986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019331  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.019361  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019548  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.019766  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.019970  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.020119  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.020270  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.020473  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.020500  965412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-284111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-284111/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-284111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:18:22.149637  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:22.149671  965412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:18:22.149726  965412 buildroot.go:174] setting up certificates
	I0127 03:18:22.149746  965412 provision.go:84] configureAuth start
	I0127 03:18:22.149765  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:22.150087  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.153181  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153482  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.153504  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153707  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.156418  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.156825  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.156858  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.157060  965412 provision.go:143] copyHostCerts
	I0127 03:18:22.157140  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:18:22.157153  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:18:22.157243  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:18:22.157355  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:18:22.157366  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:18:22.157404  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:18:22.157496  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:18:22.157506  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:18:22.157546  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:18:22.157616  965412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.bridge-284111 san=[127.0.0.1 192.168.61.178 bridge-284111 localhost minikube]
	I0127 03:18:22.340623  965412 provision.go:177] copyRemoteCerts
	I0127 03:18:22.340707  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:18:22.340739  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.343784  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344187  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.344219  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344432  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.344616  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.344750  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.344872  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:22.435531  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:18:22.459380  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:18:22.481955  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 03:18:22.504297  965412 provision.go:87] duration metric: took 354.53072ms to configureAuth
	I0127 03:18:22.504340  965412 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:18:22.504542  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:22.504637  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.507527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.507981  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.508014  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.508272  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.508518  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508696  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508867  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.509083  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.509321  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.509344  965412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:18:22.745255  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:18:22.745289  965412 main.go:141] libmachine: Checking connection to Docker...
	I0127 03:18:22.745298  965412 main.go:141] libmachine: (bridge-284111) Calling .GetURL
	I0127 03:18:22.746733  965412 main.go:141] libmachine: (bridge-284111) DBG | using libvirt version 6000000
	I0127 03:18:22.748816  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749210  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.749235  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749452  965412 main.go:141] libmachine: Docker is up and running!
	I0127 03:18:22.749468  965412 main.go:141] libmachine: Reticulating splines...
	I0127 03:18:22.749477  965412 client.go:171] duration metric: took 24.643847103s to LocalClient.Create
	I0127 03:18:22.749501  965412 start.go:167] duration metric: took 24.643920715s to libmachine.API.Create "bridge-284111"
	I0127 03:18:22.749510  965412 start.go:293] postStartSetup for "bridge-284111" (driver="kvm2")
	I0127 03:18:22.749521  965412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:18:22.749538  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.749766  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:18:22.749791  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.752050  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752455  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.752481  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752670  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.752875  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.753046  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.753209  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:22.838649  965412 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:18:22.842594  965412 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:18:22.842623  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:18:22.842702  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:18:22.842811  965412 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:18:22.842925  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:18:22.851615  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:22.873576  965412 start.go:296] duration metric: took 124.051614ms for postStartSetup
	I0127 03:18:22.873628  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:22.874263  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.877366  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877690  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.877717  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877984  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:18:22.878205  965412 start.go:128] duration metric: took 24.793394051s to createHost
	I0127 03:18:22.878230  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.880656  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881029  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.881057  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881273  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.881451  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881617  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881735  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.881878  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.882070  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.882081  965412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:18:22.993428  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947902.961069921
	
	I0127 03:18:22.993452  965412 fix.go:216] guest clock: 1737947902.961069921
	I0127 03:18:22.993459  965412 fix.go:229] Guest: 2025-01-27 03:18:22.961069921 +0000 UTC Remote: 2025-01-27 03:18:22.878219801 +0000 UTC m=+24.911173814 (delta=82.85012ms)
	I0127 03:18:22.993480  965412 fix.go:200] guest clock delta is within tolerance: 82.85012ms
	I0127 03:18:22.993486  965412 start.go:83] releasing machines lock for "bridge-284111", held for 24.908799324s
	I0127 03:18:22.993504  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.993771  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.996377  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996721  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.996743  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996876  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997362  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997554  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997692  965412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:18:22.997726  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.997831  965412 ssh_runner.go:195] Run: cat /version.json
	I0127 03:18:22.997879  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:23.000390  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000715  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.000748  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000765  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000835  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001133  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001212  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.001255  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.001296  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001383  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001468  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.001516  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001641  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001749  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.082154  965412 ssh_runner.go:195] Run: systemctl --version
	I0127 03:18:23.117345  965412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:18:23.273868  965412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:18:23.280724  965412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:18:23.280787  965412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:18:23.296482  965412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:18:23.296511  965412 start.go:495] detecting cgroup driver to use...
	I0127 03:18:23.296594  965412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:18:23.311864  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:18:23.326213  965412 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:18:23.326279  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:18:23.340218  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:18:23.354322  965412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:18:23.476775  965412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:18:23.639888  965412 docker.go:233] disabling docker service ...
	I0127 03:18:23.639952  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:18:23.654213  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:18:23.666393  965412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:18:23.791691  965412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:18:23.913216  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:18:23.928195  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:18:23.946645  965412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:18:23.946719  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.956606  965412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:18:23.956669  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.966456  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.975900  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.985665  965412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:18:23.996373  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.005997  965412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.022695  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.032296  965412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:18:24.041565  965412 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:18:24.041627  965412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:18:24.054330  965412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:18:24.064064  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:24.182330  965412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:18:24.274584  965412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:18:24.274671  965412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:18:24.279679  965412 start.go:563] Will wait 60s for crictl version
	I0127 03:18:24.279736  965412 ssh_runner.go:195] Run: which crictl
	I0127 03:18:24.283480  965412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:18:24.325459  965412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:18:24.325556  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.358736  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.389379  965412 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:18:24.390675  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:24.393731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394168  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:24.394201  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394421  965412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:18:24.398415  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:24.413708  965412 kubeadm.go:883] updating cluster {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:18:24.413840  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:18:24.413899  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:24.444435  965412 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:18:24.444515  965412 ssh_runner.go:195] Run: which lz4
	I0127 03:18:24.448257  965412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:18:24.451999  965412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:18:24.452038  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:18:25.746010  965412 crio.go:462] duration metric: took 1.297780518s to copy over tarball
	I0127 03:18:25.746099  965412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:18:28.004354  965412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258210919s)
	I0127 03:18:28.004393  965412 crio.go:469] duration metric: took 2.258349498s to extract the tarball
	I0127 03:18:28.004404  965412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:18:28.043277  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:28.083196  965412 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:18:28.083221  965412 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:18:28.083229  965412 kubeadm.go:934] updating node { 192.168.61.178 8443 v1.32.1 crio true true} ...
	I0127 03:18:28.083347  965412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-284111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 03:18:28.083435  965412 ssh_runner.go:195] Run: crio config
	I0127 03:18:28.136532  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:28.136559  965412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:18:28.136582  965412 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-284111 NodeName:bridge-284111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:18:28.136722  965412 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-284111"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.178"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:18:28.136785  965412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:18:28.148059  965412 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:18:28.148148  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:18:28.159212  965412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 03:18:28.177174  965412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:18:28.194607  965412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 03:18:28.212099  965412 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0127 03:18:28.216059  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:28.229417  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:28.371410  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:28.389537  965412 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111 for IP: 192.168.61.178
	I0127 03:18:28.389563  965412 certs.go:194] generating shared ca certs ...
	I0127 03:18:28.389583  965412 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.389758  965412 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:18:28.389807  965412 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:18:28.389843  965412 certs.go:256] generating profile certs ...
	I0127 03:18:28.389921  965412 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key
	I0127 03:18:28.389966  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt with IP's: []
	I0127 03:18:28.445000  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt ...
	I0127 03:18:28.445033  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt: {Name:mk9e7d9c51cfe9365fde4974dd819fc8a0bc2c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445242  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key ...
	I0127 03:18:28.445257  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key: {Name:mk894eba5407f86f4d0ac29f6591849b258437b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445372  965412 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd
	I0127 03:18:28.445393  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.178]
	I0127 03:18:28.526577  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd ...
	I0127 03:18:28.526609  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd: {Name:mk6aec7505a30c2d0a25e9e0af381fa28e034b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527301  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd ...
	I0127 03:18:28.527321  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd: {Name:mka5254c805742e5a010001442cf41b9cd6eb55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527419  965412 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt
	I0127 03:18:28.527506  965412 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key
	I0127 03:18:28.527579  965412 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key
	I0127 03:18:28.527604  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt with IP's: []
	I0127 03:18:28.748033  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt ...
	I0127 03:18:28.748067  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt: {Name:mk5216cbd26d0be2d45e0038f200d35e4ccd2e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748266  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key ...
	I0127 03:18:28.748285  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key: {Name:mk834e366bff2ac05f8e145b0ed8884b9ec0040a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748490  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:18:28.748541  965412 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:18:28.748557  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:18:28.748588  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:18:28.748617  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:18:28.748649  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:18:28.748699  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:28.749391  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:18:28.774598  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:18:28.797221  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:18:28.819775  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:18:28.844206  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 03:18:28.868818  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:18:28.893782  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:18:28.918276  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:18:28.942153  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:18:28.964770  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:18:28.987187  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:18:29.011066  965412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:18:29.027191  965412 ssh_runner.go:195] Run: openssl version
	I0127 03:18:29.033146  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:18:29.044813  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049334  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049405  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.055257  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:18:29.068772  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:18:29.083121  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087778  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087846  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.095607  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:18:29.108404  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:18:29.123881  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130048  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130122  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.135495  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:18:29.146435  965412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:18:29.150627  965412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 03:18:29.150696  965412 kubeadm.go:392] StartCluster: {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:18:29.150795  965412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:18:29.150878  965412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:18:29.193528  965412 cri.go:89] found id: ""
	I0127 03:18:29.193616  965412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:18:29.203514  965412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:18:29.213077  965412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:18:29.225040  965412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:18:29.225067  965412 kubeadm.go:157] found existing configuration files:
	
	I0127 03:18:29.225118  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:18:29.234175  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:18:29.234234  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:18:29.243247  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:18:29.252478  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:18:29.252533  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:18:29.262187  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.271490  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:18:29.271550  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.281421  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:18:29.289870  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:18:29.289944  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:18:29.298976  965412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:18:29.453263  965412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:18:39.039753  965412 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:18:39.039835  965412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:18:39.039931  965412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:18:39.040064  965412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:18:39.040201  965412 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:18:39.040292  965412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:18:39.041906  965412 out.go:235]   - Generating certificates and keys ...
	I0127 03:18:39.042004  965412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:18:39.042097  965412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:18:39.042190  965412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 03:18:39.042251  965412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 03:18:39.042319  965412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 03:18:39.042370  965412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 03:18:39.042423  965412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 03:18:39.042563  965412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042626  965412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 03:18:39.042798  965412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042911  965412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 03:18:39.043006  965412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 03:18:39.043074  965412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 03:18:39.043158  965412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:18:39.043267  965412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:18:39.043359  965412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:18:39.043439  965412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:18:39.043526  965412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:18:39.043598  965412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:18:39.043710  965412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:18:39.043807  965412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:18:39.045144  965412 out.go:235]   - Booting up control plane ...
	I0127 03:18:39.045244  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:18:39.045327  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:18:39.045407  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:18:39.045550  965412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:18:39.045646  965412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:18:39.045707  965412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:18:39.045807  965412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:18:39.045898  965412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:18:39.045994  965412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.82396ms
	I0127 03:18:39.046096  965412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:18:39.046186  965412 kubeadm.go:310] [api-check] The API server is healthy after 5.003089327s
	I0127 03:18:39.046295  965412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:18:39.046472  965412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:18:39.046560  965412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:18:39.046735  965412 kubeadm.go:310] [mark-control-plane] Marking the node bridge-284111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:18:39.046819  965412 kubeadm.go:310] [bootstrap-token] Using token: 9vz6c7.t2ey9xa65s2m5rce
	I0127 03:18:39.048225  965412 out.go:235]   - Configuring RBAC rules ...
	I0127 03:18:39.048342  965412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:18:39.048430  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:18:39.048558  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:18:39.048663  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:18:39.048758  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:18:39.048829  965412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:18:39.048972  965412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:18:39.049013  965412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:18:39.049058  965412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:18:39.049064  965412 kubeadm.go:310] 
	I0127 03:18:39.049117  965412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:18:39.049123  965412 kubeadm.go:310] 
	I0127 03:18:39.049204  965412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:18:39.049211  965412 kubeadm.go:310] 
	I0127 03:18:39.049232  965412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:18:39.049289  965412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:18:39.049374  965412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:18:39.049387  965412 kubeadm.go:310] 
	I0127 03:18:39.049462  965412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:18:39.049472  965412 kubeadm.go:310] 
	I0127 03:18:39.049547  965412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:18:39.049555  965412 kubeadm.go:310] 
	I0127 03:18:39.049628  965412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:18:39.049755  965412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:18:39.049867  965412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:18:39.049877  965412 kubeadm.go:310] 
	I0127 03:18:39.049992  965412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:18:39.050101  965412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:18:39.050111  965412 kubeadm.go:310] 
	I0127 03:18:39.050182  965412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050284  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:18:39.050318  965412 kubeadm.go:310] 	--control-plane 
	I0127 03:18:39.050325  965412 kubeadm.go:310] 
	I0127 03:18:39.050393  965412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:18:39.050399  965412 kubeadm.go:310] 
	I0127 03:18:39.050483  965412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050641  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:18:39.050656  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:39.052074  965412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:18:39.053180  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:18:39.065430  965412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 03:18:39.085517  965412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:18:39.085626  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.085655  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-284111 minikube.k8s.io/updated_at=2025_01_27T03_18_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=bridge-284111 minikube.k8s.io/primary=true
	I0127 03:18:39.236877  965412 ops.go:34] apiserver oom_adj: -16
	I0127 03:18:39.239687  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.739742  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.240439  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.740627  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.240543  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.740802  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.239814  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.740769  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.239766  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.362731  965412 kubeadm.go:1113] duration metric: took 4.27717357s to wait for elevateKubeSystemPrivileges
	I0127 03:18:43.362780  965412 kubeadm.go:394] duration metric: took 14.212089282s to StartCluster
	I0127 03:18:43.362819  965412 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.362902  965412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:18:43.364337  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.364571  965412 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:18:43.364601  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 03:18:43.364623  965412 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:18:43.364821  965412 addons.go:69] Setting storage-provisioner=true in profile "bridge-284111"
	I0127 03:18:43.364832  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:43.364844  965412 addons.go:238] Setting addon storage-provisioner=true in "bridge-284111"
	I0127 03:18:43.364884  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.364893  965412 addons.go:69] Setting default-storageclass=true in profile "bridge-284111"
	I0127 03:18:43.364911  965412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-284111"
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365478  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365586  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.366316  965412 out.go:177] * Verifying Kubernetes components...
	I0127 03:18:43.367578  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:43.382144  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0127 03:18:43.382166  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0127 03:18:43.382709  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.382710  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.383321  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383343  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383326  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383448  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.384068  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.384497  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.384547  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.388396  965412 addons.go:238] Setting addon default-storageclass=true in "bridge-284111"
	I0127 03:18:43.388448  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.388836  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.388888  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.401487  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0127 03:18:43.401963  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.402532  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.402555  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.402948  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.403176  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.405227  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.406011  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0127 03:18:43.406386  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.406864  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.406895  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.407221  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.407649  965412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:18:43.407895  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.407952  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.409292  965412 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.409316  965412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:18:43.409339  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.413101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.413591  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.413629  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.414006  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.414216  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.414393  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.414580  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:43.427369  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0127 03:18:43.427939  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.429588  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.429624  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.430052  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.430287  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.432335  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.432595  965412 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:43.432622  965412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:18:43.432642  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.436101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436528  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.436573  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436690  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.436907  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.437126  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.437286  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:43.623874  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:43.623927  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 03:18:43.650661  965412 node_ready.go:35] waiting up to 15m0s for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667546  965412 node_ready.go:49] node "bridge-284111" has status "Ready":"True"
	I0127 03:18:43.667583  965412 node_ready.go:38] duration metric: took 16.886127ms for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667599  965412 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:43.687207  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:43.743454  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.814389  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:44.280907  965412 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 03:18:44.793593  965412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-284111" context rescaled to 1 replicas
	I0127 03:18:44.833718  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.09022136s)
	I0127 03:18:44.833772  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833809  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.833861  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019432049s)
	I0127 03:18:44.833920  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833938  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834133  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834152  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834178  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834186  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834409  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834427  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834450  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834446  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834458  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834464  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834668  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834701  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.848046  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.848123  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.849692  965412 main.go:141] libmachine: (bridge-284111) DBG | Closing plugin on server side
	I0127 03:18:44.849714  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.849724  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.852448  965412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 03:18:44.853648  965412 addons.go:514] duration metric: took 1.489024932s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 03:18:45.694816  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:46.193044  965412 pod_ready.go:93] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:46.193071  965412 pod_ready.go:82] duration metric: took 2.505825793s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:46.193081  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:48.199298  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:50.699488  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:53.198865  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:55.199017  965412 pod_ready.go:98] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.178 HostIPs:[{IP:192.168.61
.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00250df60}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 03:18:55.199049  965412 pod_ready.go:82] duration metric: took 9.005962015s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	E0127 03:18:55.199068  965412 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.178 HostIPs:[{IP:192.168.61.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc00250df60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 03:18:55.199080  965412 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203029  965412 pod_ready.go:93] pod "etcd-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.203055  965412 pod_ready.go:82] duration metric: took 3.966832ms for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203069  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208264  965412 pod_ready.go:93] pod "kube-apiserver-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.208286  965412 pod_ready.go:82] duration metric: took 5.209412ms for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208296  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215716  965412 pod_ready.go:93] pod "kube-controller-manager-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.215737  965412 pod_ready.go:82] duration metric: took 7.434091ms for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215747  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220146  965412 pod_ready.go:93] pod "kube-proxy-hrrdg" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.220172  965412 pod_ready.go:82] duration metric: took 4.416975ms for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220184  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601116  965412 pod_ready.go:93] pod "kube-scheduler-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.601153  965412 pod_ready.go:82] duration metric: took 380.959358ms for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601167  965412 pod_ready.go:39] duration metric: took 11.933546372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:55.601190  965412 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:18:55.601249  965412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:18:55.615311  965412 api_server.go:72] duration metric: took 12.250702622s to wait for apiserver process to appear ...
	I0127 03:18:55.615353  965412 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:18:55.615381  965412 api_server.go:253] Checking apiserver healthz at https://192.168.61.178:8443/healthz ...
	I0127 03:18:55.620633  965412 api_server.go:279] https://192.168.61.178:8443/healthz returned 200:
	ok
	I0127 03:18:55.621585  965412 api_server.go:141] control plane version: v1.32.1
	I0127 03:18:55.621610  965412 api_server.go:131] duration metric: took 6.249694ms to wait for apiserver health ...
	I0127 03:18:55.621618  965412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:18:55.799117  965412 system_pods.go:59] 7 kube-system pods found
	I0127 03:18:55.799150  965412 system_pods.go:61] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:55.799155  965412 system_pods.go:61] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:55.799159  965412 system_pods.go:61] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:55.799163  965412 system_pods.go:61] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:55.799166  965412 system_pods.go:61] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:55.799170  965412 system_pods.go:61] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:55.799173  965412 system_pods.go:61] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:55.799180  965412 system_pods.go:74] duration metric: took 177.555316ms to wait for pod list to return data ...
	I0127 03:18:55.799187  965412 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:18:55.996306  965412 default_sa.go:45] found service account: "default"
	I0127 03:18:55.996333  965412 default_sa.go:55] duration metric: took 197.140724ms for default service account to be created ...
	I0127 03:18:55.996343  965412 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:18:56.198691  965412 system_pods.go:87] 7 kube-system pods found
	I0127 03:18:56.397259  965412 system_pods.go:105] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:56.397285  965412 system_pods.go:105] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:56.397291  965412 system_pods.go:105] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:56.397296  965412 system_pods.go:105] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:56.397302  965412 system_pods.go:105] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:56.397306  965412 system_pods.go:105] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:56.397310  965412 system_pods.go:105] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:56.397318  965412 system_pods.go:147] duration metric: took 400.968435ms to wait for k8s-apps to be running ...
	I0127 03:18:56.397325  965412 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 03:18:56.397373  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:18:56.413149  965412 system_svc.go:56] duration metric: took 15.80669ms WaitForService to wait for kubelet
	I0127 03:18:56.413188  965412 kubeadm.go:582] duration metric: took 13.048583267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:18:56.413230  965412 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:18:56.596472  965412 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:18:56.596506  965412 node_conditions.go:123] node cpu capacity is 2
	I0127 03:18:56.596519  965412 node_conditions.go:105] duration metric: took 183.283498ms to run NodePressure ...
	I0127 03:18:56.596532  965412 start.go:241] waiting for startup goroutines ...
	I0127 03:18:56.596538  965412 start.go:246] waiting for cluster config update ...
	I0127 03:18:56.596548  965412 start.go:255] writing updated cluster config ...
	I0127 03:18:56.596809  965412 ssh_runner.go:195] Run: rm -f paused
	I0127 03:18:56.647143  965412 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:18:56.649661  965412 out.go:177] * Done! kubectl is now configured to use "bridge-284111" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.855984032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770855964319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e475cfd-80e7-415e-ae33-d35cd447a79d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.856512313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87141ba0-5e3f-44f0-854f-9aa6245ef96e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.856578603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87141ba0-5e3f-44f0-854f-9aa6245ef96e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.856821065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07,PodSandboxId:4bb83726a4497555b8df34ce95cac5af4426ae1103bd5678d9340264908427bb,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737948472244044164,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-qdlls,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 36da8bf5-7b6a-49e0-89f5-1159b4e9b719,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac96e389aec69bf8de85f4293b206fcaa911d3c8c997b4d66fa54d98c8996cb,PodSandboxId:d76a0d0cfa3f1bfdf6c3062d8fcd37ef9908126644c2222b7e333c79a7690be4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737947508999098324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cg665,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e1d58bcf-aec6-41eb-a192-e7a9292a03d0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5d931d592032e6ab5b13767e5c76aaef0d91115aee777e1c28fd3441735ea1d,PodSandboxId:51ea672c06965746fd52dd3ab8121f8afdb0c30de93273d8fd4f8fe875bce12f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737947487378703331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d711a0-b203-4f87-9b3e-42cc4ec4c4e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98eefd9696714738ec49e72f7c372e699c234600a40cc9f46749c6d2bc03727e,PodSandboxId:f9a6695e97d4a9f7452e5a1737e968462b3fefedadf552cc25f087f4a105f3f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486869315474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-w79lb,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84895192-c7b3-4307-92ce-e5a874d1c151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1912ed616e990b24edb951b1764e6f39aad3b82f256a1c6e367f6d980e75de8,PodSandboxId:a4bd7849a6704569fe8f8c3fabbe3b358b123e817f0200135990517bac8a2eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486749580354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bhmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3df7494-99db-492d-852f-b3019a4a5f59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3b1d7510241468091ce1f4a1fb7123365b444c0daa20f5297e38556bf727fa,PodSandboxId:a2f3e78379ae3bcadd4b852fa73219e16ee37f0a0ca3e1c9f371f7ccb996dab2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737947485475853510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-br56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de9065a-ccfd-497e-859b-f6a6de73e192,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b572b5fb41f7177df759ea1bce10162f23c624db147e7c3f5b738176c98bc5e,PodSandboxId:4764b4a4f9de8980229d7972d14d572a68daf8f96655266d578ad2db306aa702,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d
956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737947475033793386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc8ba94d0449f6a668de4cde66bb660e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd39d11bd613b6ed9096b29f25ce04b1e36410fcca60670738c3fa9c1397ab2,PodSandboxId:6abd6ef262065fb26eb1a92085fa05e3cd17a69bd2e04f2aae250d9f3f27529d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe55563945538
74934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737947475113107549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee5849dc291c74abbc0a13d73a2eb55cdf4d75996245def05789f3a0a6d1449,PodSandboxId:285f9c9c1c92e3003e2074ab482644dd7f82522f3d2478d3bbf2cb17614ee88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737947475054963448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969d4c94c184da0048db81e4a0aa05b6,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011d18ded487d53acb42a84e14ec02460835eb9ce73ef1bdb88549b7e7bb70a9,PodSandboxId:0da74c1caf6bc808dcca142dcab0c6d4b4144eadbcf3e94043cb9e0e983111a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737947475009921379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c24e87a960415b5647b54078ec6fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072732b659d19163968526f4b6426f5b5441646f989b55ca6539f061649a2b06,PodSandboxId:d3c899c737a4c496f2b3fe71b6f21b06190b7af873c892776b1e3cb3c38c8908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737947186329278956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87141ba0-5e3f-44f0-854f-9aa6245ef96e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.894970330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7349785b-5387-485b-8a33-dade56aae256 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.895064632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7349785b-5387-485b-8a33-dade56aae256 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.896243953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e7751bd-0dc0-47f3-95d4-c35d19a4d706 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.896804515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770896779717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e7751bd-0dc0-47f3-95d4-c35d19a4d706 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.897423266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1ba7870-a0e7-484d-b609-e6488e2af906 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.897487696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1ba7870-a0e7-484d-b609-e6488e2af906 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.897903572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07,PodSandboxId:4bb83726a4497555b8df34ce95cac5af4426ae1103bd5678d9340264908427bb,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737948472244044164,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-qdlls,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 36da8bf5-7b6a-49e0-89f5-1159b4e9b719,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac96e389aec69bf8de85f4293b206fcaa911d3c8c997b4d66fa54d98c8996cb,PodSandboxId:d76a0d0cfa3f1bfdf6c3062d8fcd37ef9908126644c2222b7e333c79a7690be4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737947508999098324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cg665,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e1d58bcf-aec6-41eb-a192-e7a9292a03d0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5d931d592032e6ab5b13767e5c76aaef0d91115aee777e1c28fd3441735ea1d,PodSandboxId:51ea672c06965746fd52dd3ab8121f8afdb0c30de93273d8fd4f8fe875bce12f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737947487378703331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d711a0-b203-4f87-9b3e-42cc4ec4c4e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98eefd9696714738ec49e72f7c372e699c234600a40cc9f46749c6d2bc03727e,PodSandboxId:f9a6695e97d4a9f7452e5a1737e968462b3fefedadf552cc25f087f4a105f3f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486869315474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-w79lb,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84895192-c7b3-4307-92ce-e5a874d1c151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1912ed616e990b24edb951b1764e6f39aad3b82f256a1c6e367f6d980e75de8,PodSandboxId:a4bd7849a6704569fe8f8c3fabbe3b358b123e817f0200135990517bac8a2eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486749580354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bhmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3df7494-99db-492d-852f-b3019a4a5f59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3b1d7510241468091ce1f4a1fb7123365b444c0daa20f5297e38556bf727fa,PodSandboxId:a2f3e78379ae3bcadd4b852fa73219e16ee37f0a0ca3e1c9f371f7ccb996dab2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737947485475853510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-br56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de9065a-ccfd-497e-859b-f6a6de73e192,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b572b5fb41f7177df759ea1bce10162f23c624db147e7c3f5b738176c98bc5e,PodSandboxId:4764b4a4f9de8980229d7972d14d572a68daf8f96655266d578ad2db306aa702,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d
956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737947475033793386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc8ba94d0449f6a668de4cde66bb660e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd39d11bd613b6ed9096b29f25ce04b1e36410fcca60670738c3fa9c1397ab2,PodSandboxId:6abd6ef262065fb26eb1a92085fa05e3cd17a69bd2e04f2aae250d9f3f27529d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe55563945538
74934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737947475113107549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee5849dc291c74abbc0a13d73a2eb55cdf4d75996245def05789f3a0a6d1449,PodSandboxId:285f9c9c1c92e3003e2074ab482644dd7f82522f3d2478d3bbf2cb17614ee88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737947475054963448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969d4c94c184da0048db81e4a0aa05b6,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011d18ded487d53acb42a84e14ec02460835eb9ce73ef1bdb88549b7e7bb70a9,PodSandboxId:0da74c1caf6bc808dcca142dcab0c6d4b4144eadbcf3e94043cb9e0e983111a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737947475009921379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c24e87a960415b5647b54078ec6fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072732b659d19163968526f4b6426f5b5441646f989b55ca6539f061649a2b06,PodSandboxId:d3c899c737a4c496f2b3fe71b6f21b06190b7af873c892776b1e3cb3c38c8908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737947186329278956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1ba7870-a0e7-484d-b609-e6488e2af906 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.934679218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aacee601-319f-47d5-a571-15f85272c660 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.934769683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aacee601-319f-47d5-a571-15f85272c660 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.936082428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c80348a2-f9ce-4b6e-b532-c6a17cb4d84d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.936844007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770936815120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c80348a2-f9ce-4b6e-b532-c6a17cb4d84d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.937565011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fb1001d-dccc-4fc6-a00f-7e88d9dfdfc8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.937618189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fb1001d-dccc-4fc6-a00f-7e88d9dfdfc8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.937856020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07,PodSandboxId:4bb83726a4497555b8df34ce95cac5af4426ae1103bd5678d9340264908427bb,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737948472244044164,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-qdlls,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 36da8bf5-7b6a-49e0-89f5-1159b4e9b719,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac96e389aec69bf8de85f4293b206fcaa911d3c8c997b4d66fa54d98c8996cb,PodSandboxId:d76a0d0cfa3f1bfdf6c3062d8fcd37ef9908126644c2222b7e333c79a7690be4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737947508999098324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cg665,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e1d58bcf-aec6-41eb-a192-e7a9292a03d0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5d931d592032e6ab5b13767e5c76aaef0d91115aee777e1c28fd3441735ea1d,PodSandboxId:51ea672c06965746fd52dd3ab8121f8afdb0c30de93273d8fd4f8fe875bce12f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737947487378703331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d711a0-b203-4f87-9b3e-42cc4ec4c4e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98eefd9696714738ec49e72f7c372e699c234600a40cc9f46749c6d2bc03727e,PodSandboxId:f9a6695e97d4a9f7452e5a1737e968462b3fefedadf552cc25f087f4a105f3f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486869315474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-w79lb,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84895192-c7b3-4307-92ce-e5a874d1c151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1912ed616e990b24edb951b1764e6f39aad3b82f256a1c6e367f6d980e75de8,PodSandboxId:a4bd7849a6704569fe8f8c3fabbe3b358b123e817f0200135990517bac8a2eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486749580354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bhmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3df7494-99db-492d-852f-b3019a4a5f59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3b1d7510241468091ce1f4a1fb7123365b444c0daa20f5297e38556bf727fa,PodSandboxId:a2f3e78379ae3bcadd4b852fa73219e16ee37f0a0ca3e1c9f371f7ccb996dab2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737947485475853510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-br56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de9065a-ccfd-497e-859b-f6a6de73e192,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b572b5fb41f7177df759ea1bce10162f23c624db147e7c3f5b738176c98bc5e,PodSandboxId:4764b4a4f9de8980229d7972d14d572a68daf8f96655266d578ad2db306aa702,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d
956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737947475033793386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc8ba94d0449f6a668de4cde66bb660e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd39d11bd613b6ed9096b29f25ce04b1e36410fcca60670738c3fa9c1397ab2,PodSandboxId:6abd6ef262065fb26eb1a92085fa05e3cd17a69bd2e04f2aae250d9f3f27529d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe55563945538
74934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737947475113107549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee5849dc291c74abbc0a13d73a2eb55cdf4d75996245def05789f3a0a6d1449,PodSandboxId:285f9c9c1c92e3003e2074ab482644dd7f82522f3d2478d3bbf2cb17614ee88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737947475054963448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969d4c94c184da0048db81e4a0aa05b6,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011d18ded487d53acb42a84e14ec02460835eb9ce73ef1bdb88549b7e7bb70a9,PodSandboxId:0da74c1caf6bc808dcca142dcab0c6d4b4144eadbcf3e94043cb9e0e983111a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737947475009921379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c24e87a960415b5647b54078ec6fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072732b659d19163968526f4b6426f5b5441646f989b55ca6539f061649a2b06,PodSandboxId:d3c899c737a4c496f2b3fe71b6f21b06190b7af873c892776b1e3cb3c38c8908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737947186329278956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fb1001d-dccc-4fc6-a00f-7e88d9dfdfc8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.973502116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58ff344e-204f-4fc5-9c3c-3bde3a21a553 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.973572404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58ff344e-204f-4fc5-9c3c-3bde3a21a553 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.974700593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6b9a1a0-f24f-4bf2-b18d-45059902f171 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.975127192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770975107336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6b9a1a0-f24f-4bf2-b18d-45059902f171 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.975840402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8470d7ce-e025-4ba5-9b10-daa73ffc6dff name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.975892242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8470d7ce-e025-4ba5-9b10-daa73ffc6dff name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:32:50 default-k8s-diff-port-150897 crio[733]: time="2025-01-27 03:32:50.976125978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07,PodSandboxId:4bb83726a4497555b8df34ce95cac5af4426ae1103bd5678d9340264908427bb,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737948472244044164,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-qdlls,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 36da8bf5-7b6a-49e0-89f5-1159b4e9b719,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac96e389aec69bf8de85f4293b206fcaa911d3c8c997b4d66fa54d98c8996cb,PodSandboxId:d76a0d0cfa3f1bfdf6c3062d8fcd37ef9908126644c2222b7e333c79a7690be4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737947508999098324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cg665,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e1d58bcf-aec6-41eb-a192-e7a9292a03d0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5d931d592032e6ab5b13767e5c76aaef0d91115aee777e1c28fd3441735ea1d,PodSandboxId:51ea672c06965746fd52dd3ab8121f8afdb0c30de93273d8fd4f8fe875bce12f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737947487378703331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d711a0-b203-4f87-9b3e-42cc4ec4c4e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98eefd9696714738ec49e72f7c372e699c234600a40cc9f46749c6d2bc03727e,PodSandboxId:f9a6695e97d4a9f7452e5a1737e968462b3fefedadf552cc25f087f4a105f3f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486869315474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-w79lb,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84895192-c7b3-4307-92ce-e5a874d1c151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1912ed616e990b24edb951b1764e6f39aad3b82f256a1c6e367f6d980e75de8,PodSandboxId:a4bd7849a6704569fe8f8c3fabbe3b358b123e817f0200135990517bac8a2eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737947486749580354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bhmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3df7494-99db-492d-852f-b3019a4a5f59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3b1d7510241468091ce1f4a1fb7123365b444c0daa20f5297e38556bf727fa,PodSandboxId:a2f3e78379ae3bcadd4b852fa73219e16ee37f0a0ca3e1c9f371f7ccb996dab2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737947485475853510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-br56d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de9065a-ccfd-497e-859b-f6a6de73e192,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b572b5fb41f7177df759ea1bce10162f23c624db147e7c3f5b738176c98bc5e,PodSandboxId:4764b4a4f9de8980229d7972d14d572a68daf8f96655266d578ad2db306aa702,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d
956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737947475033793386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc8ba94d0449f6a668de4cde66bb660e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd39d11bd613b6ed9096b29f25ce04b1e36410fcca60670738c3fa9c1397ab2,PodSandboxId:6abd6ef262065fb26eb1a92085fa05e3cd17a69bd2e04f2aae250d9f3f27529d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe55563945538
74934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737947475113107549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee5849dc291c74abbc0a13d73a2eb55cdf4d75996245def05789f3a0a6d1449,PodSandboxId:285f9c9c1c92e3003e2074ab482644dd7f82522f3d2478d3bbf2cb17614ee88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737947475054963448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969d4c94c184da0048db81e4a0aa05b6,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011d18ded487d53acb42a84e14ec02460835eb9ce73ef1bdb88549b7e7bb70a9,PodSandboxId:0da74c1caf6bc808dcca142dcab0c6d4b4144eadbcf3e94043cb9e0e983111a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737947475009921379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c24e87a960415b5647b54078ec6fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072732b659d19163968526f4b6426f5b5441646f989b55ca6539f061649a2b06,PodSandboxId:d3c899c737a4c496f2b3fe71b6f21b06190b7af873c892776b1e3cb3c38c8908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737947186329278956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-150897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57d675b4c1bb7464691ce70d6c5eeac,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8470d7ce-e025-4ba5-9b10-daa73ffc6dff name=/runtime.v1.RuntimeService/ListContainers
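	The crio entries above are CRI-O's own debug stream from the systemd journal: repeated Version, ImageFsInfo and ListContainers polls, all completing without errors. A minimal sketch of how to pull the same stream straight from the guest, assuming the default-k8s-diff-port-150897 VM is still running and journald has retained the unit logs:
	
	  # last 200 CRI-O journal entries, unpaged
	  out/minikube-linux-amd64 -p default-k8s-diff-port-150897 ssh "sudo journalctl -u crio --no-pager -n 200"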
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	345490fce879a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   4bb83726a4497       dashboard-metrics-scraper-86c6bf9756-qdlls
	1ac96e389aec6       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   d76a0d0cfa3f1       kubernetes-dashboard-7779f9b69b-cg665
	b5d931d592032       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   51ea672c06965       storage-provisioner
	98eefd9696714       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   f9a6695e97d4a       coredns-668d6bf9bc-w79lb
	e1912ed616e99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   a4bd7849a6704       coredns-668d6bf9bc-bhmkn
	4d3b1d7510241       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   a2f3e78379ae3       kube-proxy-br56d
	fdd39d11bd613       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   6abd6ef262065       kube-apiserver-default-k8s-diff-port-150897
	aee5849dc291c       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   285f9c9c1c92e       kube-controller-manager-default-k8s-diff-port-150897
	6b572b5fb41f7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   4764b4a4f9de8       etcd-default-k8s-diff-port-150897
	011d18ded487d       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   0da74c1caf6bc       kube-scheduler-default-k8s-diff-port-150897
	072732b659d19       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   d3c899c737a4c       kube-apiserver-default-k8s-diff-port-150897
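	The status table above is the runtime-level view of every container on the node, including exited ones such as the dashboard-metrics-scraper attempt 8. A sketch of how to regenerate it, assuming crictl is available inside the guest (it normally is with the cri-o runtime):
	
	  # -a lists exited containers as well as running ones
	  out/minikube-linux-amd64 -p default-k8s-diff-port-150897 ssh "sudo crictl ps -a"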
	
	
	==> coredns [98eefd9696714738ec49e72f7c372e699c234600a40cc9f46749c6d2bc03727e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e1912ed616e990b24edb951b1764e6f39aad3b82f256a1c6e367f6d980e75de8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
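	Both CoreDNS replicas print only their startup banner here; neither reports resolution errors. Fuller logs could also be pulled through the API server rather than the container runtime, assuming the kubeconfig context carries the profile name:
	
	  # one replica; substitute coredns-668d6bf9bc-bhmkn for the other
	  kubectl --context default-k8s-diff-port-150897 -n kube-system logs coredns-668d6bf9bc-w79lb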
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-150897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-150897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=default-k8s-diff-port-150897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T03_11_20_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 03:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-150897
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 03:32:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 03:30:03 +0000   Mon, 27 Jan 2025 03:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 03:30:03 +0000   Mon, 27 Jan 2025 03:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 03:30:03 +0000   Mon, 27 Jan 2025 03:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 03:30:03 +0000   Mon, 27 Jan 2025 03:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.57
	  Hostname:    default-k8s-diff-port-150897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aa476287bb7428bbc40d296076e6ecd
	  System UUID:                7aa47628-7bb7-428b-bc40-d296076e6ecd
	  Boot ID:                    d7101ff8-a6e2-489f-ac5d-73879ed8e152
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bhmkn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-w79lb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-150897                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-150897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-150897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-br56d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-150897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-t88bf                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-qdlls              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-cg665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node default-k8s-diff-port-150897 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node default-k8s-diff-port-150897 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node default-k8s-diff-port-150897 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node default-k8s-diff-port-150897 event: Registered Node default-k8s-diff-port-150897 in Controller
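	The node description shows a single Ready control-plane node with no taints and roughly 950m CPU / 440Mi memory requested. The same view can be regenerated at any time, assuming the kubeconfig context matches the profile name:
	
	  kubectl --context default-k8s-diff-port-150897 describe node default-k8s-diff-port-150897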
	
	
	==> dmesg <==
	[  +5.057843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.058967] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.598448] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305801] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.068197] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070750] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.204068] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.159600] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.308713] systemd-fstab-generator[723]: Ignoring "noauto" option for root device
	[  +4.214047] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.067835] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.417163] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +5.460584] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.711575] kauditd_printk_skb: 54 callbacks suppressed
	[Jan27 03:07] kauditd_printk_skb: 31 callbacks suppressed
	[Jan27 03:11] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.077200] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.003233] systemd-fstab-generator[3022]: Ignoring "noauto" option for root device
	[  +0.099684] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.363112] systemd-fstab-generator[3166]: Ignoring "noauto" option for root device
	[  +0.041435] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.963093] kauditd_printk_skb: 112 callbacks suppressed
	[ +14.687596] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [6b572b5fb41f7177df759ea1bce10162f23c624db147e7c3f5b738176c98bc5e] <==
	{"level":"info","ts":"2025-01-27T03:15:16.523264Z","caller":"traceutil/trace.go:171","msg":"trace[518481715] linearizableReadLoop","detail":"{readStateIndex:836; appliedIndex:835; }","duration":"200.121958ms","start":"2025-01-27T03:15:16.323054Z","end":"2025-01-27T03:15:16.523176Z","steps":["trace[518481715] 'read index received'  (duration: 197.89272ms)","trace[518481715] 'applied index is now lower than readState.Index'  (duration: 2.228656ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:15:16.523652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.47706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:15:16.523695Z","caller":"traceutil/trace.go:171","msg":"trace[1089934735] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:776; }","duration":"200.660324ms","start":"2025-01-27T03:15:16.323023Z","end":"2025-01-27T03:15:16.523683Z","steps":["trace[1089934735] 'agreement among raft nodes before linearized reading'  (duration: 200.462562ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:15:16.523262Z","caller":"traceutil/trace.go:171","msg":"trace[1575026489] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"235.841874ms","start":"2025-01-27T03:15:16.287398Z","end":"2025-01-27T03:15:16.523240Z","steps":["trace[1575026489] 'process raft request'  (duration: 233.61103ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:16:42.417818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.9465ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9685998863167373627 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls.181e6e0a39ebec33\" mod_revision:804 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls.181e6e0a39ebec33\" value_size:895 lease:462626826312597817 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls.181e6e0a39ebec33\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T03:16:42.418211Z","caller":"traceutil/trace.go:171","msg":"trace[1362773149] linearizableReadLoop","detail":"{readStateIndex:936; appliedIndex:935; }","duration":"146.514101ms","start":"2025-01-27T03:16:42.271671Z","end":"2025-01-27T03:16:42.418185Z","steps":["trace[1362773149] 'read index received'  (duration: 25.935941ms)","trace[1362773149] 'applied index is now lower than readState.Index'  (duration: 120.577001ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T03:16:42.418217Z","caller":"traceutil/trace.go:171","msg":"trace[1408374562] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"183.399547ms","start":"2025-01-27T03:16:42.234795Z","end":"2025-01-27T03:16:42.418194Z","steps":["trace[1408374562] 'process raft request'  (duration: 62.864884ms)","trace[1408374562] 'compare'  (duration: 119.829525ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:16:42.418318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.659651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:16:42.419017Z","caller":"traceutil/trace.go:171","msg":"trace[1465094894] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:857; }","duration":"147.384051ms","start":"2025-01-27T03:16:42.271615Z","end":"2025-01-27T03:16:42.418999Z","steps":["trace[1465094894] 'agreement among raft nodes before linearized reading'  (duration: 146.631581ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:18:29.943867Z","caller":"traceutil/trace.go:171","msg":"trace[1903561440] linearizableReadLoop","detail":"{readStateIndex:1053; appliedIndex:1052; }","duration":"277.81502ms","start":"2025-01-27T03:18:29.666003Z","end":"2025-01-27T03:18:29.943818Z","steps":["trace[1903561440] 'read index received'  (duration: 277.636759ms)","trace[1903561440] 'applied index is now lower than readState.Index'  (duration: 177.666µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T03:18:29.944186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.102223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:18:29.944218Z","caller":"traceutil/trace.go:171","msg":"trace[2030239539] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:952; }","duration":"278.227047ms","start":"2025-01-27T03:18:29.665980Z","end":"2025-01-27T03:18:29.944207Z","steps":["trace[2030239539] 'agreement among raft nodes before linearized reading'  (duration: 278.066786ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:18:29.944618Z","caller":"traceutil/trace.go:171","msg":"trace[1464594478] transaction","detail":"{read_only:false; response_revision:952; number_of_response:1; }","duration":"309.534986ms","start":"2025-01-27T03:18:29.635071Z","end":"2025-01-27T03:18:29.944606Z","steps":["trace[1464594478] 'process raft request'  (duration: 308.633109ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T03:18:29.944813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T03:18:29.635052Z","time spent":"309.621609ms","remote":"127.0.0.1:35130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:949 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T03:18:30.205845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.609162ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T03:18:30.205907Z","caller":"traceutil/trace.go:171","msg":"trace[1583633247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:952; }","duration":"140.70296ms","start":"2025-01-27T03:18:30.065193Z","end":"2025-01-27T03:18:30.205896Z","steps":["trace[1583633247] 'range keys from in-memory index tree'  (duration: 140.532979ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T03:21:16.226780Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2025-01-27T03:21:16.254814Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":832,"took":"27.533929ms","hash":882884320,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2871296,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T03:21:16.254881Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":882884320,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T03:26:16.233706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1084}
	{"level":"info","ts":"2025-01-27T03:26:16.243629Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1084,"took":"9.588164ms","hash":3843314148,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:26:16.243752Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3843314148,"revision":1084,"compact-revision":832}
	{"level":"info","ts":"2025-01-27T03:31:16.247066Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1335}
	{"level":"info","ts":"2025-01-27T03:31:16.251813Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1335,"took":"4.018602ms","hash":2878215453,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1777664,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T03:31:16.251908Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2878215453,"revision":1335,"compact-revision":1084}
	
	
	==> kernel <==
	 03:32:51 up 26 min,  0 users,  load average: 0.00, 0.08, 0.14
	Linux default-k8s-diff-port-150897 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [072732b659d19163968526f4b6426f5b5441646f989b55ca6539f061649a2b06] <==
	W0127 03:11:11.407112       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.487616       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.487751       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.581730       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.586245       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.843962       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.886218       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:11.944007       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.044202       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.256953       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.287840       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.291507       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.405722       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.410447       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.444784       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.470035       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.493617       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.571156       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.610118       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.614762       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.626530       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.675784       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.758035       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.780033       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 03:11:12.863939       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fdd39d11bd613b6ed9096b29f25ce04b1e36410fcca60670738c3fa9c1397ab2] <==
	I0127 03:29:18.693162       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:29:18.693259       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:31:17.692413       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:31:17.692866       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 03:31:18.695129       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 03:31:18.695208       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:31:18.695274       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 03:31:18.695380       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:31:18.696436       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:31:18.696449       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 03:32:18.697434       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:32:18.697507       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 03:32:18.697434       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 03:32:18.697568       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 03:32:18.698542       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 03:32:18.698579       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [aee5849dc291c74abbc0a13d73a2eb55cdf4d75996245def05789f3a0a6d1449] <==
	I0127 03:27:53.185589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="154.07µs"
	I0127 03:27:53.241743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="63.762µs"
	E0127 03:27:54.424950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:27:54.491284       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:28:02.182928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="61.756µs"
	E0127 03:28:24.432197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:28:24.499051       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:28:54.437730       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:28:54.506777       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:29:24.445167       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:29:24.514819       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:29:54.451373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:29:54.521964       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:30:03.313558       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-150897"
	E0127 03:30:24.457227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:30:24.528682       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:30:54.463525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:30:54.535984       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:31:24.470740       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:31:24.544267       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:31:54.477365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:31:54.552284       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 03:32:24.483668       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 03:32:24.559778       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 03:32:45.241776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="294.91µs"
	
	
	==> kube-proxy [4d3b1d7510241468091ce1f4a1fb7123365b444c0daa20f5297e38556bf727fa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 03:11:25.945249       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 03:11:25.960518       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.57"]
	E0127 03:11:25.960594       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 03:11:26.032121       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 03:11:26.032176       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 03:11:26.032202       1 server_linux.go:170] "Using iptables Proxier"
	I0127 03:11:26.036099       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 03:11:26.036417       1 server.go:497] "Version info" version="v1.32.1"
	I0127 03:11:26.036441       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 03:11:26.038222       1 config.go:199] "Starting service config controller"
	I0127 03:11:26.038277       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 03:11:26.038303       1 config.go:105] "Starting endpoint slice config controller"
	I0127 03:11:26.038376       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 03:11:26.039418       1 config.go:329] "Starting node config controller"
	I0127 03:11:26.039445       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 03:11:26.139618       1 shared_informer.go:320] Caches are synced for node config
	I0127 03:11:26.139661       1 shared_informer.go:320] Caches are synced for service config
	I0127 03:11:26.139670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [011d18ded487d53acb42a84e14ec02460835eb9ce73ef1bdb88549b7e7bb70a9] <==
	W0127 03:11:18.604406       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 03:11:18.604470       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.616406       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 03:11:18.616459       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.657156       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 03:11:18.657203       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.657237       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 03:11:18.657247       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.762113       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 03:11:18.762168       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.794940       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 03:11:18.795006       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.807695       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 03:11:18.807755       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.845473       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 03:11:18.845522       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.850016       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 03:11:18.850065       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.876585       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 03:11:18.876630       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:18.967583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 03:11:18.967664       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 03:11:19.071369       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 03:11:19.071503       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 03:11:21.189654       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 03:32:14 default-k8s-diff-port-150897 kubelet[3029]: I0127 03:32:14.222928    3029 scope.go:117] "RemoveContainer" containerID="345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07"
	Jan 27 03:32:14 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:14.223070    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qdlls_kubernetes-dashboard(36da8bf5-7b6a-49e0-89f5-1159b4e9b719)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls" podUID="36da8bf5-7b6a-49e0-89f5-1159b4e9b719"
	Jan 27 03:32:18 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:18.223111    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-t88bf" podUID="a5199359-da1d-44dc-acbc-54f288f148ce"
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:20.268856    3029 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:20.626111    3029 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948740625783165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:20 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:20.626383    3029 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948740625783165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:27 default-k8s-diff-port-150897 kubelet[3029]: I0127 03:32:27.222927    3029 scope.go:117] "RemoveContainer" containerID="345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07"
	Jan 27 03:32:27 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:27.223468    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qdlls_kubernetes-dashboard(36da8bf5-7b6a-49e0-89f5-1159b4e9b719)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls" podUID="36da8bf5-7b6a-49e0-89f5-1159b4e9b719"
	Jan 27 03:32:30 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:30.628672    3029 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948750628246377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:30 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:30.628957    3029 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948750628246377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:33 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:33.239638    3029 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:32:33 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:33.240168    3029 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 03:32:33 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:33.241075    3029 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w2jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-t88bf_kube-system(a5199359-da1d-44dc-acbc-54f288f148ce): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 03:32:33 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:33.242434    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-t88bf" podUID="a5199359-da1d-44dc-acbc-54f288f148ce"
	Jan 27 03:32:40 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:40.630475    3029 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948760630076032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:40 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:40.630938    3029 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948760630076032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:41 default-k8s-diff-port-150897 kubelet[3029]: I0127 03:32:41.222426    3029 scope.go:117] "RemoveContainer" containerID="345490fce879a9444a203a96c8a542def5f368d790d64e66730a720aa6445b07"
	Jan 27 03:32:41 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:41.222695    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-qdlls_kubernetes-dashboard(36da8bf5-7b6a-49e0-89f5-1159b4e9b719)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-qdlls" podUID="36da8bf5-7b6a-49e0-89f5-1159b4e9b719"
	Jan 27 03:32:45 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:45.227259    3029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-t88bf" podUID="a5199359-da1d-44dc-acbc-54f288f148ce"
	Jan 27 03:32:50 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:50.632838    3029 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770632563846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 03:32:50 default-k8s-diff-port-150897 kubelet[3029]: E0127 03:32:50.632864    3029 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948770632563846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [1ac96e389aec69bf8de85f4293b206fcaa911d3c8c997b4d66fa54d98c8996cb] <==
	2025/01/27 03:20:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:21:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:22:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:23:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:24:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:24:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:25:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:25:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:26:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:26:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:27:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:27:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:28:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:28:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:29:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:29:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:30:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:30:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:31:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:31:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:32:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 03:32:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b5d931d592032e6ab5b13767e5c76aaef0d91115aee777e1c28fd3441735ea1d] <==
	I0127 03:11:27.560766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 03:11:27.571849       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 03:11:27.571904       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 03:11:27.585072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 03:11:27.585299       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-150897_6428e9a4-df5b-4a08-a9c6-7572cbf42274!
	I0127 03:11:27.585375       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb26e288-37ca-4740-b44c-07796b710645", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-150897_6428e9a4-df5b-4a08-a9c6-7572cbf42274 became leader
	I0127 03:11:27.686249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-150897_6428e9a4-df5b-4a08-a9c6-7572cbf42274!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-150897 -n default-k8s-diff-port-150897
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-150897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-t88bf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-150897 describe pod metrics-server-f79f97bbb-t88bf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-150897 describe pod metrics-server-f79f97bbb-t88bf: exit status 1 (63.9903ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-t88bf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-150897 describe pod metrics-server-f79f97bbb-t88bf: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1615.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
	(previous warning repeated 82 more times)
E0127 03:09:48.567398  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
	(previous warning repeated 82 more times)
E0127 03:11:11.638237  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[identical warning repeated 21 more times]
E0127 03:11:33.566447  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[identical warning repeated 138 more times]
E0127 03:13:51.788852  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:51.795446  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:51.806884  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:51.828433  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:51.869904  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:51.951429  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:52.113039  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:52.434794  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:13:53.076551  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:13:54.358382  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:13:56.920192  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:14:02.042201  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:14:12.283881  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:14:32.766004  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:14:48.567022  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:13.727550  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:38.319993  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.326453  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.337935  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.359683  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.401260  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.482842  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:38.644163  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:38.965603  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:15:39.607665  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:40.889824  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:43.452074  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:48.574216  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:15:58.816279  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:16:19.298379  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[the identical warning above was logged 13 more times]
E0127 03:16:33.566824  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[the identical warning above was logged 2 more times]
E0127 03:16:35.649841  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[the identical warning above was logged 23 more times]
E0127 03:17:00.260073  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
[the identical warning above was logged 24 more times]
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (243.264419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-542356" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
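The repeated warnings above come from the harness polling the profile's apiserver for the dashboard pod until the 9m0s deadline expires. Below is a minimal Go sketch of that kind of poll, not the suite's own helper code: it assumes client-go, a 5-second retry interval, and the KUBECONFIG path that appears later in this log, and it relies on the kubeconfig's current context pointing at the old-k8s-version-542356 endpoint (192.168.39.85:8443).

```go
// Sketch only: list pods labelled k8s-app=kubernetes-dashboard in the
// kubernetes-dashboard namespace and retry until a deadline, mirroring the
// request URL seen in the warnings. The 5s interval and kubeconfig path are
// assumptions, not values taken from helpers_test.go.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig's current context targets https://192.168.39.85:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-897624/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		switch {
		case err != nil:
			// With the apiserver down this is the "dial tcp 192.168.39.85:8443:
			// connect: connection refused" error logged as a WARNING above.
			fmt.Println("WARNING: pod list returned:", err)
		case len(pods.Items) > 0:
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded") // matches the failure at 9m0s
			return
		case <-time.After(5 * time.Second):
		}
	}
}
```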
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (231.619003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
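The status probes above read a single field from minikube's status output via a Go template and tolerate a non-zero exit, which minikube uses to encode component state. A minimal sketch of that pattern (not the harness's own helper), with the binary path and profile name copied from the log:

```go
// Sketch only: run `minikube status --format={{.Host}}` for a profile and
// report a non-zero exit code as informational, the way the post-mortem
// above prints "status error: exit status 2 (may be ok)".
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}",
		"-p", "old-k8s-version-542356", "-n", "old-k8s-version-542356")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out)) // e.g. "Running" or "Stopped"

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit here reflects cluster component state, so it is
		// logged rather than treated as a hard failure.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // binary missing, not executable, etc.
	}
	fmt.Println("host state:", state)
}
```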
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-542356 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111 sudo cat                | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111 sudo cat                | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111 sudo cat                | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-284111                         | enable-default-cni-284111 | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:16 UTC |
	| start   | -p flannel-284111                                    | flannel-284111            | jenkins | v1.35.0 | 27 Jan 25 03:16 UTC | 27 Jan 25 03:17 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:16:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:16:08.162975  962802 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:16:08.163315  962802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:08.163327  962802 out.go:358] Setting ErrFile to fd 2...
	I0127 03:16:08.163332  962802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:08.163536  962802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:16:08.164196  962802 out.go:352] Setting JSON to false
	I0127 03:16:08.165576  962802 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14311,"bootTime":1737933457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:16:08.165706  962802 start.go:139] virtualization: kvm guest
	I0127 03:16:08.167778  962802 out.go:177] * [flannel-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:16:08.168984  962802 notify.go:220] Checking for updates...
	I0127 03:16:08.169027  962802 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:16:08.170347  962802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:16:08.171696  962802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:16:08.173012  962802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:16:08.174217  962802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:16:08.175321  962802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:16:08.176837  962802 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:16:08.177019  962802 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:16:08.177105  962802 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:16:08.177205  962802 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:16:08.217428  962802 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 03:16:08.218669  962802 start.go:297] selected driver: kvm2
	I0127 03:16:08.218688  962802 start.go:901] validating driver "kvm2" against <nil>
	I0127 03:16:08.218699  962802 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:16:08.219434  962802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:16:08.219558  962802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:16:08.236560  962802 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:16:08.236654  962802 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 03:16:08.237014  962802 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:16:08.237087  962802 cni.go:84] Creating CNI manager for "flannel"
	I0127 03:16:08.237104  962802 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0127 03:16:08.237200  962802 start.go:340] cluster config:
	{Name:flannel-284111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:16:08.237331  962802 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:16:08.239082  962802 out.go:177] * Starting "flannel-284111" primary control-plane node in "flannel-284111" cluster
	I0127 03:16:08.240242  962802 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:16:08.240299  962802 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:16:08.240315  962802 cache.go:56] Caching tarball of preloaded images
	I0127 03:16:08.240457  962802 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:16:08.240469  962802 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:16:08.240611  962802 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/config.json ...
	I0127 03:16:08.240638  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/config.json: {Name:mkf4e58c1b3951048ca2e3b6b18fd358901eb70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:08.244164  962802 start.go:360] acquireMachinesLock for flannel-284111: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:16:08.244215  962802 start.go:364] duration metric: took 26.978µs to acquireMachinesLock for "flannel-284111"
	I0127 03:16:08.244241  962802 start.go:93] Provisioning new machine with config: &{Name:flannel-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:16:08.244329  962802 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 03:16:08.246430  962802 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 03:16:08.246593  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:08.246651  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:08.263425  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0127 03:16:08.264104  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:08.264825  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:08.264852  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:08.265367  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:08.265614  962802 main.go:141] libmachine: (flannel-284111) Calling .GetMachineName
	I0127 03:16:08.265804  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:08.266047  962802 start.go:159] libmachine.API.Create for "flannel-284111" (driver="kvm2")
	I0127 03:16:08.266092  962802 client.go:168] LocalClient.Create starting
	I0127 03:16:08.266142  962802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 03:16:08.266200  962802 main.go:141] libmachine: Decoding PEM data...
	I0127 03:16:08.266225  962802 main.go:141] libmachine: Parsing certificate...
	I0127 03:16:08.266300  962802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 03:16:08.266329  962802 main.go:141] libmachine: Decoding PEM data...
	I0127 03:16:08.266350  962802 main.go:141] libmachine: Parsing certificate...
	I0127 03:16:08.266384  962802 main.go:141] libmachine: Running pre-create checks...
	I0127 03:16:08.266398  962802 main.go:141] libmachine: (flannel-284111) Calling .PreCreateCheck
	I0127 03:16:08.266860  962802 main.go:141] libmachine: (flannel-284111) Calling .GetConfigRaw
	I0127 03:16:08.267330  962802 main.go:141] libmachine: Creating machine...
	I0127 03:16:08.267344  962802 main.go:141] libmachine: (flannel-284111) Calling .Create
	I0127 03:16:08.267494  962802 main.go:141] libmachine: (flannel-284111) creating KVM machine...
	I0127 03:16:08.267513  962802 main.go:141] libmachine: (flannel-284111) creating network...
	I0127 03:16:08.268817  962802 main.go:141] libmachine: (flannel-284111) DBG | found existing default KVM network
	I0127 03:16:08.270434  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.270251  962825 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:80:59} reservation:<nil>}
	I0127 03:16:08.271840  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.271755  962825 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:c5:54} reservation:<nil>}
	I0127 03:16:08.273132  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.273043  962825 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000310960}
	I0127 03:16:08.273176  962802 main.go:141] libmachine: (flannel-284111) DBG | created network xml: 
	I0127 03:16:08.273196  962802 main.go:141] libmachine: (flannel-284111) DBG | <network>
	I0127 03:16:08.273222  962802 main.go:141] libmachine: (flannel-284111) DBG |   <name>mk-flannel-284111</name>
	I0127 03:16:08.273244  962802 main.go:141] libmachine: (flannel-284111) DBG |   <dns enable='no'/>
	I0127 03:16:08.273256  962802 main.go:141] libmachine: (flannel-284111) DBG |   
	I0127 03:16:08.273267  962802 main.go:141] libmachine: (flannel-284111) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 03:16:08.273281  962802 main.go:141] libmachine: (flannel-284111) DBG |     <dhcp>
	I0127 03:16:08.273294  962802 main.go:141] libmachine: (flannel-284111) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 03:16:08.273303  962802 main.go:141] libmachine: (flannel-284111) DBG |     </dhcp>
	I0127 03:16:08.273310  962802 main.go:141] libmachine: (flannel-284111) DBG |   </ip>
	I0127 03:16:08.273318  962802 main.go:141] libmachine: (flannel-284111) DBG |   
	I0127 03:16:08.273324  962802 main.go:141] libmachine: (flannel-284111) DBG | </network>
	I0127 03:16:08.273330  962802 main.go:141] libmachine: (flannel-284111) DBG | 
	I0127 03:16:08.278594  962802 main.go:141] libmachine: (flannel-284111) DBG | trying to create private KVM network mk-flannel-284111 192.168.61.0/24...
	I0127 03:16:08.359421  962802 main.go:141] libmachine: (flannel-284111) DBG | private KVM network mk-flannel-284111 192.168.61.0/24 created
	I0127 03:16:08.359456  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.359396  962825 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:16:08.359469  962802 main.go:141] libmachine: (flannel-284111) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111 ...
	I0127 03:16:08.359482  962802 main.go:141] libmachine: (flannel-284111) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 03:16:08.359595  962802 main.go:141] libmachine: (flannel-284111) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 03:16:08.666397  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.666231  962825 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa...
	I0127 03:16:08.915850  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.915696  962825 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/flannel-284111.rawdisk...
	I0127 03:16:08.915888  962802 main.go:141] libmachine: (flannel-284111) DBG | Writing magic tar header
	I0127 03:16:08.915909  962802 main.go:141] libmachine: (flannel-284111) DBG | Writing SSH key tar header
	I0127 03:16:08.915929  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:08.915877  962825 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111 ...
	I0127 03:16:08.916063  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111
	I0127 03:16:08.916093  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 03:16:08.916109  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111 (perms=drwx------)
	I0127 03:16:08.916122  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 03:16:08.916133  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 03:16:08.916148  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 03:16:08.916184  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 03:16:08.916200  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:16:08.916218  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 03:16:08.916227  962802 main.go:141] libmachine: (flannel-284111) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 03:16:08.916235  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 03:16:08.916247  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home/jenkins
	I0127 03:16:08.916255  962802 main.go:141] libmachine: (flannel-284111) creating domain...
	I0127 03:16:08.916271  962802 main.go:141] libmachine: (flannel-284111) DBG | checking permissions on dir: /home
	I0127 03:16:08.916283  962802 main.go:141] libmachine: (flannel-284111) DBG | skipping /home - not owner
	I0127 03:16:08.917694  962802 main.go:141] libmachine: (flannel-284111) define libvirt domain using xml: 
	I0127 03:16:08.917714  962802 main.go:141] libmachine: (flannel-284111) <domain type='kvm'>
	I0127 03:16:08.917733  962802 main.go:141] libmachine: (flannel-284111)   <name>flannel-284111</name>
	I0127 03:16:08.917744  962802 main.go:141] libmachine: (flannel-284111)   <memory unit='MiB'>3072</memory>
	I0127 03:16:08.917752  962802 main.go:141] libmachine: (flannel-284111)   <vcpu>2</vcpu>
	I0127 03:16:08.917756  962802 main.go:141] libmachine: (flannel-284111)   <features>
	I0127 03:16:08.917761  962802 main.go:141] libmachine: (flannel-284111)     <acpi/>
	I0127 03:16:08.917769  962802 main.go:141] libmachine: (flannel-284111)     <apic/>
	I0127 03:16:08.917780  962802 main.go:141] libmachine: (flannel-284111)     <pae/>
	I0127 03:16:08.917784  962802 main.go:141] libmachine: (flannel-284111)     
	I0127 03:16:08.917790  962802 main.go:141] libmachine: (flannel-284111)   </features>
	I0127 03:16:08.917797  962802 main.go:141] libmachine: (flannel-284111)   <cpu mode='host-passthrough'>
	I0127 03:16:08.917801  962802 main.go:141] libmachine: (flannel-284111)   
	I0127 03:16:08.917808  962802 main.go:141] libmachine: (flannel-284111)   </cpu>
	I0127 03:16:08.917813  962802 main.go:141] libmachine: (flannel-284111)   <os>
	I0127 03:16:08.917822  962802 main.go:141] libmachine: (flannel-284111)     <type>hvm</type>
	I0127 03:16:08.917829  962802 main.go:141] libmachine: (flannel-284111)     <boot dev='cdrom'/>
	I0127 03:16:08.917834  962802 main.go:141] libmachine: (flannel-284111)     <boot dev='hd'/>
	I0127 03:16:08.917839  962802 main.go:141] libmachine: (flannel-284111)     <bootmenu enable='no'/>
	I0127 03:16:08.917842  962802 main.go:141] libmachine: (flannel-284111)   </os>
	I0127 03:16:08.917882  962802 main.go:141] libmachine: (flannel-284111)   <devices>
	I0127 03:16:08.917910  962802 main.go:141] libmachine: (flannel-284111)     <disk type='file' device='cdrom'>
	I0127 03:16:08.917926  962802 main.go:141] libmachine: (flannel-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/boot2docker.iso'/>
	I0127 03:16:08.917937  962802 main.go:141] libmachine: (flannel-284111)       <target dev='hdc' bus='scsi'/>
	I0127 03:16:08.917945  962802 main.go:141] libmachine: (flannel-284111)       <readonly/>
	I0127 03:16:08.917952  962802 main.go:141] libmachine: (flannel-284111)     </disk>
	I0127 03:16:08.917962  962802 main.go:141] libmachine: (flannel-284111)     <disk type='file' device='disk'>
	I0127 03:16:08.917975  962802 main.go:141] libmachine: (flannel-284111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 03:16:08.917990  962802 main.go:141] libmachine: (flannel-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/flannel-284111.rawdisk'/>
	I0127 03:16:08.918003  962802 main.go:141] libmachine: (flannel-284111)       <target dev='hda' bus='virtio'/>
	I0127 03:16:08.918013  962802 main.go:141] libmachine: (flannel-284111)     </disk>
	I0127 03:16:08.918022  962802 main.go:141] libmachine: (flannel-284111)     <interface type='network'>
	I0127 03:16:08.918036  962802 main.go:141] libmachine: (flannel-284111)       <source network='mk-flannel-284111'/>
	I0127 03:16:08.918045  962802 main.go:141] libmachine: (flannel-284111)       <model type='virtio'/>
	I0127 03:16:08.918069  962802 main.go:141] libmachine: (flannel-284111)     </interface>
	I0127 03:16:08.918087  962802 main.go:141] libmachine: (flannel-284111)     <interface type='network'>
	I0127 03:16:08.918099  962802 main.go:141] libmachine: (flannel-284111)       <source network='default'/>
	I0127 03:16:08.918109  962802 main.go:141] libmachine: (flannel-284111)       <model type='virtio'/>
	I0127 03:16:08.918127  962802 main.go:141] libmachine: (flannel-284111)     </interface>
	I0127 03:16:08.918136  962802 main.go:141] libmachine: (flannel-284111)     <serial type='pty'>
	I0127 03:16:08.918143  962802 main.go:141] libmachine: (flannel-284111)       <target port='0'/>
	I0127 03:16:08.918151  962802 main.go:141] libmachine: (flannel-284111)     </serial>
	I0127 03:16:08.918162  962802 main.go:141] libmachine: (flannel-284111)     <console type='pty'>
	I0127 03:16:08.918171  962802 main.go:141] libmachine: (flannel-284111)       <target type='serial' port='0'/>
	I0127 03:16:08.918180  962802 main.go:141] libmachine: (flannel-284111)     </console>
	I0127 03:16:08.918188  962802 main.go:141] libmachine: (flannel-284111)     <rng model='virtio'>
	I0127 03:16:08.918199  962802 main.go:141] libmachine: (flannel-284111)       <backend model='random'>/dev/random</backend>
	I0127 03:16:08.918210  962802 main.go:141] libmachine: (flannel-284111)     </rng>
	I0127 03:16:08.918223  962802 main.go:141] libmachine: (flannel-284111)     
	I0127 03:16:08.918229  962802 main.go:141] libmachine: (flannel-284111)     
	I0127 03:16:08.918238  962802 main.go:141] libmachine: (flannel-284111)   </devices>
	I0127 03:16:08.918243  962802 main.go:141] libmachine: (flannel-284111) </domain>
	I0127 03:16:08.918248  962802 main.go:141] libmachine: (flannel-284111) 
	I0127 03:16:08.922400  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:77:b8:4c in network default
	I0127 03:16:08.922952  962802 main.go:141] libmachine: (flannel-284111) starting domain...
	I0127 03:16:08.922975  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:08.922983  962802 main.go:141] libmachine: (flannel-284111) ensuring networks are active...
	I0127 03:16:08.923636  962802 main.go:141] libmachine: (flannel-284111) Ensuring network default is active
	I0127 03:16:08.923923  962802 main.go:141] libmachine: (flannel-284111) Ensuring network mk-flannel-284111 is active
	I0127 03:16:08.924484  962802 main.go:141] libmachine: (flannel-284111) getting domain XML...
	I0127 03:16:08.925358  962802 main.go:141] libmachine: (flannel-284111) creating domain...
	I0127 03:16:10.208751  962802 main.go:141] libmachine: (flannel-284111) waiting for IP...
	I0127 03:16:10.209761  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:10.210226  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:10.210296  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:10.210225  962825 retry.go:31] will retry after 224.403383ms: waiting for domain to come up
	I0127 03:16:10.436838  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:10.437442  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:10.437474  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:10.437408  962825 retry.go:31] will retry after 263.754721ms: waiting for domain to come up
	I0127 03:16:10.703861  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:10.704491  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:10.704547  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:10.704463  962825 retry.go:31] will retry after 471.384229ms: waiting for domain to come up
	I0127 03:16:11.178277  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:11.178898  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:11.178931  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:11.178853  962825 retry.go:31] will retry after 415.04155ms: waiting for domain to come up
	I0127 03:16:11.595519  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:11.596047  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:11.596095  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:11.596021  962825 retry.go:31] will retry after 534.847017ms: waiting for domain to come up
	I0127 03:16:12.132811  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:12.133455  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:12.133482  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:12.133420  962825 retry.go:31] will retry after 921.135142ms: waiting for domain to come up
	I0127 03:16:13.056548  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:13.057193  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:13.057247  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:13.057163  962825 retry.go:31] will retry after 801.303439ms: waiting for domain to come up
	I0127 03:16:13.859656  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:13.860172  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:13.860224  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:13.860154  962825 retry.go:31] will retry after 1.276766814s: waiting for domain to come up
	I0127 03:16:15.138131  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:15.138825  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:15.138921  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:15.138824  962825 retry.go:31] will retry after 1.736456656s: waiting for domain to come up
	I0127 03:16:16.876812  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:16.877421  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:16.877447  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:16.877387  962825 retry.go:31] will retry after 2.242956851s: waiting for domain to come up
	I0127 03:16:19.122320  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:19.122859  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:19.122892  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:19.122807  962825 retry.go:31] will retry after 2.309605301s: waiting for domain to come up
	I0127 03:16:21.434555  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:21.435098  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:21.435153  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:21.435074  962825 retry.go:31] will retry after 2.673843099s: waiting for domain to come up
	I0127 03:16:24.110168  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:24.110684  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:24.110714  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:24.110664  962825 retry.go:31] will retry after 3.662976414s: waiting for domain to come up
	I0127 03:16:27.777534  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:27.778143  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find current IP address of domain flannel-284111 in network mk-flannel-284111
	I0127 03:16:27.778225  962802 main.go:141] libmachine: (flannel-284111) DBG | I0127 03:16:27.778116  962825 retry.go:31] will retry after 3.877857416s: waiting for domain to come up
	I0127 03:16:31.658024  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:31.658578  962802 main.go:141] libmachine: (flannel-284111) found domain IP: 192.168.61.149
	I0127 03:16:31.658601  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has current primary IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:31.658632  962802 main.go:141] libmachine: (flannel-284111) reserving static IP address...
	I0127 03:16:31.658927  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find host DHCP lease matching {name: "flannel-284111", mac: "52:54:00:70:fd:ba", ip: "192.168.61.149"} in network mk-flannel-284111
	I0127 03:16:31.741907  962802 main.go:141] libmachine: (flannel-284111) reserved static IP address 192.168.61.149 for domain flannel-284111
	I0127 03:16:31.741948  962802 main.go:141] libmachine: (flannel-284111) waiting for SSH...
	I0127 03:16:31.741958  962802 main.go:141] libmachine: (flannel-284111) DBG | Getting to WaitForSSH function...
	I0127 03:16:31.744758  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:31.745099  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111
	I0127 03:16:31.745133  962802 main.go:141] libmachine: (flannel-284111) DBG | unable to find defined IP address of network mk-flannel-284111 interface with MAC address 52:54:00:70:fd:ba
	I0127 03:16:31.745258  962802 main.go:141] libmachine: (flannel-284111) DBG | Using SSH client type: external
	I0127 03:16:31.745291  962802 main.go:141] libmachine: (flannel-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa (-rw-------)
	I0127 03:16:31.745326  962802 main.go:141] libmachine: (flannel-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:16:31.745340  962802 main.go:141] libmachine: (flannel-284111) DBG | About to run SSH command:
	I0127 03:16:31.745355  962802 main.go:141] libmachine: (flannel-284111) DBG | exit 0
	I0127 03:16:31.749535  962802 main.go:141] libmachine: (flannel-284111) DBG | SSH cmd err, output: exit status 255: 
	I0127 03:16:31.749558  962802 main.go:141] libmachine: (flannel-284111) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 03:16:31.749569  962802 main.go:141] libmachine: (flannel-284111) DBG | command : exit 0
	I0127 03:16:31.749580  962802 main.go:141] libmachine: (flannel-284111) DBG | err     : exit status 255
	I0127 03:16:31.749591  962802 main.go:141] libmachine: (flannel-284111) DBG | output  : 
	I0127 03:16:34.750223  962802 main.go:141] libmachine: (flannel-284111) DBG | Getting to WaitForSSH function...
	I0127 03:16:34.753023  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:34.753386  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:34.753428  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:34.753444  962802 main.go:141] libmachine: (flannel-284111) DBG | Using SSH client type: external
	I0127 03:16:34.753459  962802 main.go:141] libmachine: (flannel-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa (-rw-------)
	I0127 03:16:34.753490  962802 main.go:141] libmachine: (flannel-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:16:34.753506  962802 main.go:141] libmachine: (flannel-284111) DBG | About to run SSH command:
	I0127 03:16:34.753538  962802 main.go:141] libmachine: (flannel-284111) DBG | exit 0
	I0127 03:16:34.885191  962802 main.go:141] libmachine: (flannel-284111) DBG | SSH cmd err, output: <nil>: 
	I0127 03:16:34.885466  962802 main.go:141] libmachine: (flannel-284111) KVM machine creation complete
	I0127 03:16:34.885872  962802 main.go:141] libmachine: (flannel-284111) Calling .GetConfigRaw
	I0127 03:16:34.886564  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:34.886832  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:34.887022  962802 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 03:16:34.887040  962802 main.go:141] libmachine: (flannel-284111) Calling .GetState
	I0127 03:16:34.888386  962802 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 03:16:34.888402  962802 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 03:16:34.888407  962802 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 03:16:34.888413  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:34.890958  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:34.891346  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:34.891381  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:34.891500  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:34.891668  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:34.891841  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:34.891997  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:34.892210  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:34.892488  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:34.892508  962802 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 03:16:35.004594  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:16:35.004626  962802 main.go:141] libmachine: Detecting the provisioner...
	I0127 03:16:35.004644  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.008200  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.008626  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.008666  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.008894  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:35.009178  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.009354  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.009527  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:35.009755  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:35.009989  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:35.010009  962802 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 03:16:35.126339  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 03:16:35.126481  962802 main.go:141] libmachine: found compatible host: buildroot
	I0127 03:16:35.126500  962802 main.go:141] libmachine: Provisioning with buildroot...
	I0127 03:16:35.126511  962802 main.go:141] libmachine: (flannel-284111) Calling .GetMachineName
	I0127 03:16:35.126813  962802 buildroot.go:166] provisioning hostname "flannel-284111"
	I0127 03:16:35.126898  962802 main.go:141] libmachine: (flannel-284111) Calling .GetMachineName
	I0127 03:16:35.127163  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.130162  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.130585  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.130630  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.130735  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:35.130931  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.131060  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.131179  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:35.131313  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:35.131530  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:35.131550  962802 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-284111 && echo "flannel-284111" | sudo tee /etc/hostname
	I0127 03:16:35.259666  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-284111
	
	I0127 03:16:35.259714  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.263314  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.263717  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.263752  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.264017  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:35.264267  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.264480  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.264676  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:35.264978  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:35.265176  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:35.265202  962802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-284111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-284111/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-284111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:16:35.385306  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:16:35.385334  962802 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:16:35.385386  962802 buildroot.go:174] setting up certificates
	I0127 03:16:35.385407  962802 provision.go:84] configureAuth start
	I0127 03:16:35.385418  962802 main.go:141] libmachine: (flannel-284111) Calling .GetMachineName
	I0127 03:16:35.385751  962802 main.go:141] libmachine: (flannel-284111) Calling .GetIP
	I0127 03:16:35.388782  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.389208  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.389238  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.389457  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.392115  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.392473  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.392503  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.392646  962802 provision.go:143] copyHostCerts
	I0127 03:16:35.392717  962802 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:16:35.392735  962802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:16:35.392803  962802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:16:35.392907  962802 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:16:35.392943  962802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:16:35.392978  962802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:16:35.393046  962802 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:16:35.393053  962802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:16:35.393079  962802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:16:35.393134  962802 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.flannel-284111 san=[127.0.0.1 192.168.61.149 flannel-284111 localhost minikube]
	I0127 03:16:35.644409  962802 provision.go:177] copyRemoteCerts
	I0127 03:16:35.644477  962802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:16:35.644507  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.647365  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.647815  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.647854  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.648045  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:35.648244  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.648448  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:35.648621  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:35.735175  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:16:35.759512  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0127 03:16:35.782681  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 03:16:35.806335  962802 provision.go:87] duration metric: took 420.91349ms to configureAuth
	I0127 03:16:35.806370  962802 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:16:35.806580  962802 config.go:182] Loaded profile config "flannel-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:16:35.806678  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:35.809676  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.810048  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:35.810069  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:35.810270  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:35.810475  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.810642  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:35.810791  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:35.810947  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:35.811153  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:35.811169  962802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:16:36.045521  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:16:36.045551  962802 main.go:141] libmachine: Checking connection to Docker...
	I0127 03:16:36.045560  962802 main.go:141] libmachine: (flannel-284111) Calling .GetURL
	I0127 03:16:36.046912  962802 main.go:141] libmachine: (flannel-284111) DBG | using libvirt version 6000000
	I0127 03:16:36.049619  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.050012  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.050044  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.050189  962802 main.go:141] libmachine: Docker is up and running!
	I0127 03:16:36.050201  962802 main.go:141] libmachine: Reticulating splines...
	I0127 03:16:36.050209  962802 client.go:171] duration metric: took 27.784104036s to LocalClient.Create
	I0127 03:16:36.050240  962802 start.go:167] duration metric: took 27.784196823s to libmachine.API.Create "flannel-284111"
	I0127 03:16:36.050253  962802 start.go:293] postStartSetup for "flannel-284111" (driver="kvm2")
	I0127 03:16:36.050269  962802 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:16:36.050288  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:36.050563  962802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:16:36.050589  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:36.052764  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.053163  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.053182  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.053798  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:36.054967  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:36.055122  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:36.055258  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:36.139560  962802 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:16:36.143807  962802 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:16:36.143840  962802 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:16:36.143921  962802 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:16:36.144034  962802 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:16:36.144162  962802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:16:36.153995  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:16:36.177537  962802 start.go:296] duration metric: took 127.267264ms for postStartSetup
	I0127 03:16:36.177626  962802 main.go:141] libmachine: (flannel-284111) Calling .GetConfigRaw
	I0127 03:16:36.178231  962802 main.go:141] libmachine: (flannel-284111) Calling .GetIP
	I0127 03:16:36.181202  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.181499  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.181530  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.181755  962802 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/config.json ...
	I0127 03:16:36.182016  962802 start.go:128] duration metric: took 27.937670872s to createHost
	I0127 03:16:36.182050  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:36.184501  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.184887  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.184914  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.185157  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:36.185349  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:36.185494  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:36.185672  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:36.185852  962802 main.go:141] libmachine: Using SSH client type: native
	I0127 03:16:36.186016  962802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0127 03:16:36.186027  962802 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:16:36.297719  962802 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947796.277224514
	
	I0127 03:16:36.297761  962802 fix.go:216] guest clock: 1737947796.277224514
	I0127 03:16:36.297773  962802 fix.go:229] Guest: 2025-01-27 03:16:36.277224514 +0000 UTC Remote: 2025-01-27 03:16:36.182034987 +0000 UTC m=+28.064853759 (delta=95.189527ms)
	I0127 03:16:36.297813  962802 fix.go:200] guest clock delta is within tolerance: 95.189527ms
	I0127 03:16:36.297825  962802 start.go:83] releasing machines lock for "flannel-284111", held for 28.053596986s
	I0127 03:16:36.297857  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:36.298179  962802 main.go:141] libmachine: (flannel-284111) Calling .GetIP
	I0127 03:16:36.300917  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.301331  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.301365  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.301535  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:36.302050  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:36.302215  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:36.302376  962802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:16:36.302400  962802 ssh_runner.go:195] Run: cat /version.json
	I0127 03:16:36.302424  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:36.302435  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:36.305476  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.305543  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.305868  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.305894  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.305920  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:36.305933  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:36.306132  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:36.306146  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:36.306398  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:36.306406  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:36.306623  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:36.306635  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:36.306855  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:36.306853  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:36.422211  962802 ssh_runner.go:195] Run: systemctl --version
	I0127 03:16:36.428102  962802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:16:36.584310  962802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:16:36.590590  962802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:16:36.590662  962802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:16:36.606143  962802 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:16:36.606175  962802 start.go:495] detecting cgroup driver to use...
	I0127 03:16:36.606255  962802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:16:36.621644  962802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:16:36.635920  962802 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:16:36.635996  962802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:16:36.651363  962802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:16:36.666218  962802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:16:36.796152  962802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:16:36.947505  962802 docker.go:233] disabling docker service ...
	I0127 03:16:36.947588  962802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:16:36.964349  962802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:16:36.977452  962802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:16:37.100053  962802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:16:37.216907  962802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:16:37.230738  962802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:16:37.248393  962802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:16:37.248477  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.260459  962802 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:16:37.260557  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.271153  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.281752  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.291889  962802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:16:37.304195  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.316202  962802 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.334687  962802 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:16:37.345195  962802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:16:37.355778  962802 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:16:37.355857  962802 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:16:37.369410  962802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
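
The three commands above (probe net.bridge.bridge-nf-call-iptables, fall back to loading br_netfilter when the sysctl is missing, then turn on IPv4 forwarding) form a small probe-and-fallback sequence. A minimal Go sketch of that pattern is below; it shells out the same way the log does and is illustrative only, not minikube's ssh_runner code.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func prepareNetfilter() error {
		// The bridge sysctl only exists once br_netfilter is loaded; a failed probe
		// (exit status 255 in the log) triggers the modprobe fallback.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		// Enable IPv4 forwarding, as the next command in the log does.
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}
	
	func main() {
		if err := prepareNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
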
	I0127 03:16:37.379382  962802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:16:37.489836  962802 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:16:37.584399  962802 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:16:37.584490  962802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:16:37.588840  962802 start.go:563] Will wait 60s for crictl version
	I0127 03:16:37.588902  962802 ssh_runner.go:195] Run: which crictl
	I0127 03:16:37.592334  962802 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:16:37.630562  962802 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:16:37.630643  962802 ssh_runner.go:195] Run: crio --version
	I0127 03:16:37.659019  962802 ssh_runner.go:195] Run: crio --version
	I0127 03:16:37.689311  962802 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:16:37.690438  962802 main.go:141] libmachine: (flannel-284111) Calling .GetIP
	I0127 03:16:37.693271  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:37.693520  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:37.693549  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:37.693717  962802 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:16:37.697674  962802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
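
The bash one-liner above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal line, append the current mapping, and copy the result back into place. A plain-Go equivalent of that edit might look like the sketch below; the path and hostname come from the log, but the helper itself is an assumption for illustration.

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// pinHost removes any existing line ending in "\t<name>" and appends "<ip>\t<name>",
	// then writes the file back through a temp file, mirroring the grep/echo/cp pipeline above.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath)
	}
	
	func main() {
		if err := pinHost("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
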
	I0127 03:16:37.709383  962802 kubeadm.go:883] updating cluster {Name:flannel-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.149 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:16:37.709499  962802 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:16:37.709556  962802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:16:37.749059  962802 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:16:37.749133  962802 ssh_runner.go:195] Run: which lz4
	I0127 03:16:37.752955  962802 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:16:37.756970  962802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:16:37.757028  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:16:39.104712  962802 crio.go:462] duration metric: took 1.351806801s to copy over tarball
	I0127 03:16:39.104831  962802 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:16:41.433468  962802 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.328596078s)
	I0127 03:16:41.433506  962802 crio.go:469] duration metric: took 2.328750378s to extract the tarball
	I0127 03:16:41.433517  962802 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:16:41.470785  962802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:16:41.519514  962802 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:16:41.519544  962802 cache_images.go:84] Images are preloaded, skipping loading
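
The block above is the preload flow: stat /preloaded.tar.lz4, copy the cached tarball over when it is missing, extract it with tar -I lz4 into /var, remove it, and re-run crictl images to confirm everything is now preloaded. A hedged local sketch of the stat-then-extract half (assuming the tarball has already been copied onto the machine):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// extractPreload checks that the tarball is present and unpacks it with the
	// same flags the log shows: keep security xattrs and decompress through lz4.
	func extractPreload(tarball, dest string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball not present (it would be copied over first): %w", err)
		}
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	
	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
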
	I0127 03:16:41.519555  962802 kubeadm.go:934] updating node { 192.168.61.149 8443 v1.32.1 crio true true} ...
	I0127 03:16:41.519706  962802 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-284111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
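
The kubelet unit shown above is rendered from the cluster config and pushed to the node as a systemd drop-in (the 10-kubeadm.conf scp a few lines below). A small text/template sketch that would produce an equivalent drop-in; the template text and field names are assumptions for illustration, not minikube's actual template.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	const unitTmpl = `[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log above; in practice they come from the cluster config.
		_ = t.Execute(os.Stdout, map[string]string{
			"KubeletPath": "/var/lib/minikube/binaries/v1.32.1/kubelet",
			"NodeName":    "flannel-284111",
			"NodeIP":      "192.168.61.149",
		})
	}
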
	I0127 03:16:41.519841  962802 ssh_runner.go:195] Run: crio config
	I0127 03:16:41.570263  962802 cni.go:84] Creating CNI manager for "flannel"
	I0127 03:16:41.570290  962802 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:16:41.570322  962802 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.149 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-284111 NodeName:flannel-284111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:16:41.570474  962802 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-284111"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.149"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.149"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:16:41.570555  962802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:16:41.580959  962802 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:16:41.581047  962802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:16:41.590889  962802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0127 03:16:41.607310  962802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:16:41.624230  962802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
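
The rendered kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as dumped earlier in the log). A short sketch that walks those documents with gopkg.in/yaml.v3 and prints each kind; the use of yaml.v3 and the post-copy path /var/tmp/minikube/kubeadm.yaml are assumptions, not something minikube itself does here.

	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // assumed path after the cp step later in the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// Prints one line per document, e.g. "ClusterConfiguration (kubeadm.k8s.io/v1beta4)".
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
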
	I0127 03:16:41.642532  962802 ssh_runner.go:195] Run: grep 192.168.61.149	control-plane.minikube.internal$ /etc/hosts
	I0127 03:16:41.646422  962802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:16:41.659655  962802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:16:41.785670  962802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:16:41.803572  962802 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111 for IP: 192.168.61.149
	I0127 03:16:41.803595  962802 certs.go:194] generating shared ca certs ...
	I0127 03:16:41.803616  962802 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:41.803760  962802 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:16:41.803826  962802 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:16:41.803839  962802 certs.go:256] generating profile certs ...
	I0127 03:16:41.803911  962802 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.key
	I0127 03:16:41.803942  962802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt with IP's: []
	I0127 03:16:41.884091  962802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt ...
	I0127 03:16:41.884125  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: {Name:mk0b24b2ae6348f2f3b06c5d3a9a91811a3b2029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:41.884842  962802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.key ...
	I0127 03:16:41.884860  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.key: {Name:mke35d6fe69c31e99b67e91a0d17a24ee5c05ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:41.885448  962802 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key.bdbece9b
	I0127 03:16:41.885489  962802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt.bdbece9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.149]
	I0127 03:16:42.025640  962802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt.bdbece9b ...
	I0127 03:16:42.025675  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt.bdbece9b: {Name:mk92a9feac063eaf93ab586cc5002c0d0693248f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:42.026377  962802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key.bdbece9b ...
	I0127 03:16:42.026402  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key.bdbece9b: {Name:mk86c3e43f5aeed045ae61ba5463be672ec71844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:42.026521  962802 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt.bdbece9b -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt
	I0127 03:16:42.026612  962802 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key.bdbece9b -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key
	I0127 03:16:42.026676  962802 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.key
	I0127 03:16:42.026694  962802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.crt with IP's: []
	I0127 03:16:42.144997  962802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.crt ...
	I0127 03:16:42.145045  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.crt: {Name:mk33a23c9eb404ae8020dc4f0459581321f4029d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:42.145953  962802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.key ...
	I0127 03:16:42.145986  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.key: {Name:mka06b2a56e2c23d29255a86609c25063941ea64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
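
The certs.go/crypto.go lines above issue CA-signed profile certificates, with the apiserver certificate carrying IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP). Below is a condensed crypto/x509 sketch of that kind of issuance; the key size, validity periods and helper shape are assumptions, and this is not minikube's crypto.go.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	// issueServingCert signs a new serving certificate with the given CA and embeds
	// the IP SANs, roughly what the "generating signed profile cert" steps amount to.
	func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1 and the node IP, as in the log
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}
	
	func main() {
		// Throwaway self-signed CA so the sketch runs end to end.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		certPEM, _, err := issueServingCert(caCert, caKey,
			[]net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.149")})
		if err != nil {
			os.Exit(1)
		}
		os.Stdout.Write(certPEM)
	}
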
	I0127 03:16:42.146843  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:16:42.146904  962802 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:16:42.146920  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:16:42.146953  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:16:42.146987  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:16:42.147018  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:16:42.147080  962802 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:16:42.147947  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:16:42.174259  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:16:42.197548  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:16:42.220963  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:16:42.244764  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 03:16:42.271604  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 03:16:42.296085  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:16:42.321606  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:16:42.345887  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:16:42.370341  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:16:42.393314  962802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:16:42.418358  962802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:16:42.436694  962802 ssh_runner.go:195] Run: openssl version
	I0127 03:16:42.442404  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:16:42.454884  962802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:16:42.460401  962802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:16:42.460471  962802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:16:42.466366  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:16:42.479598  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:16:42.491770  962802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:16:42.496408  962802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:16:42.496488  962802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:16:42.504623  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:16:42.521182  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:16:42.534398  962802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:16:42.539574  962802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:16:42.539646  962802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:16:42.547204  962802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
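
Each certificate copied to /usr/share/ca-certificates above is then exposed to OpenSSL-style consumers through a <subject-hash>.0 symlink in /etc/ssl/certs, with openssl x509 -hash supplying the name. A small sketch of that step; the helper is illustrative, the paths are the ones from the log.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCert asks openssl for the certificate's subject hash and points
	// <hash>.0 in certsDir at the certificate, like the ln -fs in the log.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // -f semantics: replace a stale link if present
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
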
	I0127 03:16:42.558022  962802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:16:42.561919  962802 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 03:16:42.561980  962802 kubeadm.go:392] StartCluster: {Name:flannel-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.149 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:16:42.562058  962802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:16:42.562103  962802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:16:42.600455  962802 cri.go:89] found id: ""
	I0127 03:16:42.600554  962802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:16:42.610770  962802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:16:42.622936  962802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:16:42.634976  962802 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:16:42.635004  962802 kubeadm.go:157] found existing configuration files:
	
	I0127 03:16:42.635085  962802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:16:42.644529  962802 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:16:42.644592  962802 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:16:42.655562  962802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:16:42.667115  962802 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:16:42.667191  962802 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:16:42.678073  962802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:16:42.687580  962802 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:16:42.687657  962802 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:16:42.697190  962802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:16:42.706251  962802 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:16:42.706325  962802 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:16:42.718021  962802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:16:42.869340  962802 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:16:52.279423  962802 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:16:52.279508  962802 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:16:52.279597  962802 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:16:52.279700  962802 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:16:52.279809  962802 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:16:52.279875  962802 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:16:52.281630  962802 out.go:235]   - Generating certificates and keys ...
	I0127 03:16:52.281707  962802 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:16:52.281776  962802 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:16:52.281840  962802 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 03:16:52.281891  962802 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 03:16:52.281946  962802 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 03:16:52.281998  962802 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 03:16:52.282045  962802 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 03:16:52.282147  962802 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-284111 localhost] and IPs [192.168.61.149 127.0.0.1 ::1]
	I0127 03:16:52.282228  962802 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 03:16:52.282409  962802 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-284111 localhost] and IPs [192.168.61.149 127.0.0.1 ::1]
	I0127 03:16:52.282492  962802 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 03:16:52.282586  962802 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 03:16:52.282658  962802 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 03:16:52.282736  962802 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:16:52.282817  962802 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:16:52.282914  962802 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:16:52.282998  962802 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:16:52.283084  962802 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:16:52.283163  962802 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:16:52.283276  962802 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:16:52.283376  962802 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:16:52.285819  962802 out.go:235]   - Booting up control plane ...
	I0127 03:16:52.285922  962802 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:16:52.286023  962802 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:16:52.286112  962802 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:16:52.286240  962802 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:16:52.286341  962802 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:16:52.286401  962802 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:16:52.286520  962802 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:16:52.286611  962802 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:16:52.286676  962802 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.111963ms
	I0127 03:16:52.286739  962802 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:16:52.286788  962802 kubeadm.go:310] [api-check] The API server is healthy after 4.501579746s
	I0127 03:16:52.286885  962802 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:16:52.286994  962802 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:16:52.287044  962802 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:16:52.287238  962802 kubeadm.go:310] [mark-control-plane] Marking the node flannel-284111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:16:52.287304  962802 kubeadm.go:310] [bootstrap-token] Using token: 0oz78x.8hnks99o5jp7cmuy
	I0127 03:16:52.289629  962802 out.go:235]   - Configuring RBAC rules ...
	I0127 03:16:52.289730  962802 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:16:52.289805  962802 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:16:52.289935  962802 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:16:52.290056  962802 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:16:52.290160  962802 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:16:52.290244  962802 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:16:52.290336  962802 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:16:52.290371  962802 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:16:52.290415  962802 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:16:52.290422  962802 kubeadm.go:310] 
	I0127 03:16:52.290467  962802 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:16:52.290473  962802 kubeadm.go:310] 
	I0127 03:16:52.290541  962802 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:16:52.290549  962802 kubeadm.go:310] 
	I0127 03:16:52.290569  962802 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:16:52.290621  962802 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:16:52.290662  962802 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:16:52.290667  962802 kubeadm.go:310] 
	I0127 03:16:52.290708  962802 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:16:52.290725  962802 kubeadm.go:310] 
	I0127 03:16:52.290782  962802 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:16:52.290789  962802 kubeadm.go:310] 
	I0127 03:16:52.290835  962802 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:16:52.290894  962802 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:16:52.290982  962802 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:16:52.290993  962802 kubeadm.go:310] 
	I0127 03:16:52.291100  962802 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:16:52.291210  962802 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:16:52.291219  962802 kubeadm.go:310] 
	I0127 03:16:52.291310  962802 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0oz78x.8hnks99o5jp7cmuy \
	I0127 03:16:52.291436  962802 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:16:52.291469  962802 kubeadm.go:310] 	--control-plane 
	I0127 03:16:52.291485  962802 kubeadm.go:310] 
	I0127 03:16:52.291569  962802 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:16:52.291576  962802 kubeadm.go:310] 
	I0127 03:16:52.291650  962802 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0oz78x.8hnks99o5jp7cmuy \
	I0127 03:16:52.291744  962802 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:16:52.291755  962802 cni.go:84] Creating CNI manager for "flannel"
	I0127 03:16:52.293224  962802 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0127 03:16:52.294632  962802 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 03:16:52.300077  962802 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 03:16:52.300100  962802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0127 03:16:52.322914  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 03:16:52.808322  962802 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:16:52.808399  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:52.808405  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-284111 minikube.k8s.io/updated_at=2025_01_27T03_16_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=flannel-284111 minikube.k8s.io/primary=true
	I0127 03:16:52.951072  962802 ops.go:34] apiserver oom_adj: -16
	I0127 03:16:52.951367  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:53.451527  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:53.951736  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:54.451740  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:54.951721  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:55.452279  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:55.951897  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:56.451490  962802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:16:56.555185  962802 kubeadm.go:1113] duration metric: took 3.746855928s to wait for elevateKubeSystemPrivileges
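
The repeated `kubectl get sa default` runs at 500ms intervals above are a plain readiness poll for the default service account after the RBAC binding is applied. A generic sketch of that poll pattern follows; the interval and timeout values are assumptions.

	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
		"time"
	)
	
	// waitFor retries check until it succeeds or the timeout elapses.
	func waitFor(check func() error, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}
	
	func main() {
		err := waitFor(func() error {
			return exec.Command("kubectl", "get", "sa", "default", "-n", "default").Run()
		}, 500*time.Millisecond, 2*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
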
	I0127 03:16:56.555221  962802 kubeadm.go:394] duration metric: took 13.993246394s to StartCluster
	I0127 03:16:56.555241  962802 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:56.555313  962802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:16:56.556591  962802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:56.556813  962802 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.149 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:16:56.556844  962802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 03:16:56.556860  962802 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:16:56.556980  962802 addons.go:69] Setting storage-provisioner=true in profile "flannel-284111"
	I0127 03:16:56.556999  962802 addons.go:69] Setting default-storageclass=true in profile "flannel-284111"
	I0127 03:16:56.557018  962802 addons.go:238] Setting addon storage-provisioner=true in "flannel-284111"
	I0127 03:16:56.557043  962802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-284111"
	I0127 03:16:56.557065  962802 host.go:66] Checking if "flannel-284111" exists ...
	I0127 03:16:56.557100  962802 config.go:182] Loaded profile config "flannel-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:16:56.557541  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:56.557566  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:56.557591  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:56.557593  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:56.558493  962802 out.go:177] * Verifying Kubernetes components...
	I0127 03:16:56.560014  962802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:16:56.574807  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0127 03:16:56.574841  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0127 03:16:56.575285  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:56.575306  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:56.575915  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:56.575938  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:56.576174  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:56.576197  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:56.576305  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:56.576514  962802 main.go:141] libmachine: (flannel-284111) Calling .GetState
	I0127 03:16:56.576571  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:56.577140  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:56.577172  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:56.580537  962802 addons.go:238] Setting addon default-storageclass=true in "flannel-284111"
	I0127 03:16:56.580598  962802 host.go:66] Checking if "flannel-284111" exists ...
	I0127 03:16:56.581044  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:56.581082  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:56.594495  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0127 03:16:56.595379  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:56.596637  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:56.596759  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:56.596976  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0127 03:16:56.597586  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:56.597968  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:56.598305  962802 main.go:141] libmachine: (flannel-284111) Calling .GetState
	I0127 03:16:56.598604  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:56.598628  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:56.599010  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:56.599737  962802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:16:56.599810  962802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:16:56.600023  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:56.601643  962802 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:16:56.602877  962802 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:16:56.602904  962802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:16:56.602926  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:56.606273  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:56.606678  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:56.606703  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:56.606843  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:56.607047  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:56.607201  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:56.607333  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:56.616885  962802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0127 03:16:56.617386  962802 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:16:56.617949  962802 main.go:141] libmachine: Using API Version  1
	I0127 03:16:56.617980  962802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:16:56.618381  962802 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:16:56.618596  962802 main.go:141] libmachine: (flannel-284111) Calling .GetState
	I0127 03:16:56.620245  962802 main.go:141] libmachine: (flannel-284111) Calling .DriverName
	I0127 03:16:56.620473  962802 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:16:56.620494  962802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:16:56.620518  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHHostname
	I0127 03:16:56.623570  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:56.623974  962802 main.go:141] libmachine: (flannel-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:fd:ba", ip: ""} in network mk-flannel-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:16:23 +0000 UTC Type:0 Mac:52:54:00:70:fd:ba Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:flannel-284111 Clientid:01:52:54:00:70:fd:ba}
	I0127 03:16:56.624001  962802 main.go:141] libmachine: (flannel-284111) DBG | domain flannel-284111 has defined IP address 192.168.61.149 and MAC address 52:54:00:70:fd:ba in network mk-flannel-284111
	I0127 03:16:56.624252  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHPort
	I0127 03:16:56.624494  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHKeyPath
	I0127 03:16:56.624690  962802 main.go:141] libmachine: (flannel-284111) Calling .GetSSHUsername
	I0127 03:16:56.624842  962802 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/flannel-284111/id_rsa Username:docker}
	I0127 03:16:56.767420  962802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 03:16:56.816191  962802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:16:56.928028  962802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:16:56.995287  962802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:16:57.155327  962802 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 03:16:57.158720  962802 node_ready.go:35] waiting up to 15m0s for node "flannel-284111" to be "Ready" ...
	I0127 03:16:57.284119  962802 main.go:141] libmachine: Making call to close driver server
	I0127 03:16:57.284156  962802 main.go:141] libmachine: (flannel-284111) Calling .Close
	I0127 03:16:57.284477  962802 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:16:57.284498  962802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:16:57.284509  962802 main.go:141] libmachine: Making call to close driver server
	I0127 03:16:57.284517  962802 main.go:141] libmachine: (flannel-284111) Calling .Close
	I0127 03:16:57.284785  962802 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:16:57.284811  962802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:16:57.284819  962802 main.go:141] libmachine: (flannel-284111) DBG | Closing plugin on server side
	I0127 03:16:57.295887  962802 main.go:141] libmachine: Making call to close driver server
	I0127 03:16:57.295917  962802 main.go:141] libmachine: (flannel-284111) Calling .Close
	I0127 03:16:57.296250  962802 main.go:141] libmachine: (flannel-284111) DBG | Closing plugin on server side
	I0127 03:16:57.296276  962802 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:16:57.296294  962802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:16:57.502948  962802 main.go:141] libmachine: Making call to close driver server
	I0127 03:16:57.502979  962802 main.go:141] libmachine: (flannel-284111) Calling .Close
	I0127 03:16:57.503364  962802 main.go:141] libmachine: (flannel-284111) DBG | Closing plugin on server side
	I0127 03:16:57.503411  962802 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:16:57.503429  962802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:16:57.503440  962802 main.go:141] libmachine: Making call to close driver server
	I0127 03:16:57.503454  962802 main.go:141] libmachine: (flannel-284111) Calling .Close
	I0127 03:16:57.503706  962802 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:16:57.503731  962802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:16:57.505549  962802 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 03:16:57.507238  962802 addons.go:514] duration metric: took 950.372395ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 03:16:57.661896  962802 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-284111" context rescaled to 1 replicas
	I0127 03:16:59.162982  962802 node_ready.go:53] node "flannel-284111" has status "Ready":"False"
	I0127 03:17:01.164044  962802 node_ready.go:53] node "flannel-284111" has status "Ready":"False"
	I0127 03:17:03.661915  962802 node_ready.go:53] node "flannel-284111" has status "Ready":"False"
	I0127 03:17:05.664804  962802 node_ready.go:49] node "flannel-284111" has status "Ready":"True"
	I0127 03:17:05.664831  962802 node_ready.go:38] duration metric: took 8.506078406s for node "flannel-284111" to be "Ready" ...
	I0127 03:17:05.664844  962802 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:17:05.680899  962802 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:07.689359  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:09.691009  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:12.188408  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:14.687853  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:16.690451  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:19.187851  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:21.187911  962802 pod_ready.go:103] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"False"
	I0127 03:17:22.189050  962802 pod_ready.go:93] pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.189089  962802 pod_ready.go:82] duration metric: took 16.508136419s for pod "coredns-668d6bf9bc-n84tv" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.189105  962802 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.193245  962802 pod_ready.go:93] pod "etcd-flannel-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.193268  962802 pod_ready.go:82] duration metric: took 4.155632ms for pod "etcd-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.193276  962802 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.199106  962802 pod_ready.go:93] pod "kube-apiserver-flannel-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.199133  962802 pod_ready.go:82] duration metric: took 5.849465ms for pod "kube-apiserver-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.199145  962802 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.203376  962802 pod_ready.go:93] pod "kube-controller-manager-flannel-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.203404  962802 pod_ready.go:82] duration metric: took 4.250537ms for pod "kube-controller-manager-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.203416  962802 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-26lg4" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.207926  962802 pod_ready.go:93] pod "kube-proxy-26lg4" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.207947  962802 pod_ready.go:82] duration metric: took 4.524499ms for pod "kube-proxy-26lg4" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.207956  962802 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.585311  962802 pod_ready.go:93] pod "kube-scheduler-flannel-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:22.585340  962802 pod_ready.go:82] duration metric: took 377.377307ms for pod "kube-scheduler-flannel-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:22.585351  962802 pod_ready.go:39] duration metric: took 16.92049028s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:17:22.585387  962802 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:17:22.585453  962802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:17:22.609073  962802 api_server.go:72] duration metric: took 26.052224724s to wait for apiserver process to appear ...
	I0127 03:17:22.609107  962802 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:17:22.609131  962802 api_server.go:253] Checking apiserver healthz at https://192.168.61.149:8443/healthz ...
	I0127 03:17:22.617948  962802 api_server.go:279] https://192.168.61.149:8443/healthz returned 200:
	ok
	I0127 03:17:22.619195  962802 api_server.go:141] control plane version: v1.32.1
	I0127 03:17:22.619223  962802 api_server.go:131] duration metric: took 10.106693ms to wait for apiserver health ...
	I0127 03:17:22.619235  962802 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:17:22.787866  962802 system_pods.go:59] 7 kube-system pods found
	I0127 03:17:22.787902  962802 system_pods.go:61] "coredns-668d6bf9bc-n84tv" [e346e1cb-3f94-42b4-9742-2ee940f826d2] Running
	I0127 03:17:22.787907  962802 system_pods.go:61] "etcd-flannel-284111" [eac8552b-7591-423e-a215-28997fd8a723] Running
	I0127 03:17:22.787912  962802 system_pods.go:61] "kube-apiserver-flannel-284111" [cd93f723-d6dc-433c-b4d3-0b772e9f6ad5] Running
	I0127 03:17:22.787917  962802 system_pods.go:61] "kube-controller-manager-flannel-284111" [0df400fc-e063-468b-8082-e9810ab1cb66] Running
	I0127 03:17:22.787921  962802 system_pods.go:61] "kube-proxy-26lg4" [2ac24482-bd59-41df-a2df-8592baee3318] Running
	I0127 03:17:22.787924  962802 system_pods.go:61] "kube-scheduler-flannel-284111" [9e4efbf4-83e8-42e0-93ff-08458a01edd6] Running
	I0127 03:17:22.787926  962802 system_pods.go:61] "storage-provisioner" [8ab916e8-ab37-42d6-bede-37b8a64843b3] Running
	I0127 03:17:22.787933  962802 system_pods.go:74] duration metric: took 168.690727ms to wait for pod list to return data ...
	I0127 03:17:22.787944  962802 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:17:22.985329  962802 default_sa.go:45] found service account: "default"
	I0127 03:17:22.985363  962802 default_sa.go:55] duration metric: took 197.411691ms for default service account to be created ...
	I0127 03:17:22.985375  962802 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:17:23.187747  962802 system_pods.go:87] 7 kube-system pods found
	I0127 03:17:23.385785  962802 system_pods.go:105] "coredns-668d6bf9bc-n84tv" [e346e1cb-3f94-42b4-9742-2ee940f826d2] Running
	I0127 03:17:23.385815  962802 system_pods.go:105] "etcd-flannel-284111" [eac8552b-7591-423e-a215-28997fd8a723] Running
	I0127 03:17:23.385821  962802 system_pods.go:105] "kube-apiserver-flannel-284111" [cd93f723-d6dc-433c-b4d3-0b772e9f6ad5] Running
	I0127 03:17:23.385826  962802 system_pods.go:105] "kube-controller-manager-flannel-284111" [0df400fc-e063-468b-8082-e9810ab1cb66] Running
	I0127 03:17:23.385830  962802 system_pods.go:105] "kube-proxy-26lg4" [2ac24482-bd59-41df-a2df-8592baee3318] Running
	I0127 03:17:23.385835  962802 system_pods.go:105] "kube-scheduler-flannel-284111" [9e4efbf4-83e8-42e0-93ff-08458a01edd6] Running
	I0127 03:17:23.385839  962802 system_pods.go:105] "storage-provisioner" [8ab916e8-ab37-42d6-bede-37b8a64843b3] Running
	I0127 03:17:23.385847  962802 system_pods.go:147] duration metric: took 400.464976ms to wait for k8s-apps to be running ...
	I0127 03:17:23.385855  962802 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 03:17:23.385904  962802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:23.401918  962802 system_svc.go:56] duration metric: took 16.050536ms WaitForService to wait for kubelet
	I0127 03:17:23.401958  962802 kubeadm.go:582] duration metric: took 26.845113476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:17:23.401987  962802 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:17:23.585857  962802 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:17:23.585890  962802 node_conditions.go:123] node cpu capacity is 2
	I0127 03:17:23.585906  962802 node_conditions.go:105] duration metric: took 183.914106ms to run NodePressure ...
	I0127 03:17:23.585939  962802 start.go:241] waiting for startup goroutines ...
	I0127 03:17:23.585951  962802 start.go:246] waiting for cluster config update ...
	I0127 03:17:23.585971  962802 start.go:255] writing updated cluster config ...
	I0127 03:17:23.586272  962802 ssh_runner.go:195] Run: rm -f paused
	I0127 03:17:23.641233  962802 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:17:23.642864  962802 out.go:177] * Done! kubectl is now configured to use "flannel-284111" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.493227809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947846493199619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75c09775-5f2a-46e6-85b1-bdf1b59560e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.493706135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d328a17-0fa7-4816-a536-3b72dc9b1070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.493757662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d328a17-0fa7-4816-a536-3b72dc9b1070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.493801781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0d328a17-0fa7-4816-a536-3b72dc9b1070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.524467325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d8d70dc-a376-4647-98d2-8fa152ac24de name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.524603599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d8d70dc-a376-4647-98d2-8fa152ac24de name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.526380189Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f75e5cc6-b490-4469-98c4-940f536342be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.526814202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947846526793150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f75e5cc6-b490-4469-98c4-940f536342be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.527433921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93775ad3-7ac5-4354-9918-8ac12990a03c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.527488560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93775ad3-7ac5-4354-9918-8ac12990a03c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.527530780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=93775ad3-7ac5-4354-9918-8ac12990a03c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.558918537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30f962d1-6a8a-4a43-819f-ecae2e035954 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.558997840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30f962d1-6a8a-4a43-819f-ecae2e035954 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.560081756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15935067-b460-40ec-b617-3c18b9ec18a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.560452438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947846560428539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15935067-b460-40ec-b617-3c18b9ec18a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.560987906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2568dbe-b63e-49ca-8f66-89eea5ffbceb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.561055872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2568dbe-b63e-49ca-8f66-89eea5ffbceb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.561100347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e2568dbe-b63e-49ca-8f66-89eea5ffbceb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.591436941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b34bf0f4-72a0-498d-8755-8fdf1094ce46 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.591514101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b34bf0f4-72a0-498d-8755-8fdf1094ce46 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.592777186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebda9010-c214-4a01-9bc7-3743e556adbc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.593155516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737947846593128008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebda9010-c214-4a01-9bc7-3743e556adbc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.593679821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24028c20-4c4c-4b23-8c92-c858c805af6e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.593732598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24028c20-4c4c-4b23-8c92-c858c805af6e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:17:26 old-k8s-version-542356 crio[627]: time="2025-01-27 03:17:26.593766339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=24028c20-4c4c-4b23-8c92-c858c805af6e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 03:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.073990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603879] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.822966] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.059849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073801] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.176250] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.120814] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.231774] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.304285] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.064590] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.124064] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.435199] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 03:04] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Jan27 03:06] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.074098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:17:26 up 17 min,  0 users,  load average: 0.00, 0.04, 0.03
	Linux old-k8s-version-542356 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000c2a5a0, 0xc000ba3bf0, 0x23, 0xc000c2c500)
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: created by internal/singleflight.(*Group).DoChan
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: goroutine 141 [runnable]:
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: net._C2func_getaddrinfo(0xc000bbaca0, 0x0, 0xc000c367e0, 0xc0001e7200, 0x0, 0x0, 0x0)
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         _cgo_gotypes.go:94 +0x55
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: net.cgoLookupIPCNAME.func1(0xc000bbaca0, 0x20, 0x20, 0xc000c367e0, 0xc0001e7200, 0x0, 0xc000c23ea0, 0x57a492)
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000ba3bc0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: net.cgoIPLookup(0xc00019dc20, 0x48ab5d6, 0x3, 0xc000ba3bc0, 0x1f)
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]: created by net.cgoLookupIP
	Jan 27 03:17:24 old-k8s-version-542356 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 27 03:17:24 old-k8s-version-542356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 03:17:24 old-k8s-version-542356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 03:17:25 old-k8s-version-542356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 27 03:17:25 old-k8s-version-542356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 03:17:25 old-k8s-version-542356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 03:17:25 old-k8s-version-542356 kubelet[6510]: I0127 03:17:25.115023    6510 server.go:416] Version: v1.20.0
	Jan 27 03:17:25 old-k8s-version-542356 kubelet[6510]: I0127 03:17:25.115479    6510 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 03:17:25 old-k8s-version-542356 kubelet[6510]: I0127 03:17:25.117436    6510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 03:17:25 old-k8s-version-542356 kubelet[6510]: W0127 03:17:25.118338    6510 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 03:17:25 old-k8s-version-542356 kubelet[6510]: I0127 03:17:25.118829    6510 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (230.421451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-542356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:17:28.357620  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.364137  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.375572  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.397070  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.438535  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.520010  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:28.681673  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:17:29.003372  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:17:29.645161  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:17:33.489702  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:17:38.611353  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:17:48.853038  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:18:09.334478  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:18:22.181966  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:18:50.296766  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:18:51.788096  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:14.186967  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.193420  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.204862  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.226360  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.268060  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.349965  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.512147  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:14.833986  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:15.475891  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:16.757877  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:19.319868  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:19:19.491180  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/auto-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:24.442194  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:34.683928  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:36.650813  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:48.566812  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:19:55.165836  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:12.219148  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:36.127450  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:38.319636  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:42.507982  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.514488  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.525936  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.547358  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.588806  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.670333  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:42.831916  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:43.153479  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:20:43.795664  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:45.077097  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:47.638764  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:20:52.760605  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:21:03.002315  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:21:06.023917  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/kindnet-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:21:23.483801  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:21:33.566258  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:21:58.049520  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/custom-flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:04.445994  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/enable-default-cni-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:23.662948  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.669335  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.680716  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.702092  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.743447  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.824905  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:23.986473  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:24.308037  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:24.950144  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:26.232112  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:28.357633  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:22:28.793552  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:33.915372  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:44.157442  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:22:56.061012  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
E0127 03:23:04.639567  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/flannel-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.85:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.85:8443: connect: connection refused
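
The warnings above are all the same helper poll: listing pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label against the apiserver at 192.168.39.85:8443, which keeps refusing connections while the control plane is down. The interleaved cert_rotation errors refer to client.crt files of other test profiles (flannel-284111, calico-284111, and others) that no longer exist on disk. A minimal sketch of reproducing the same query by hand, assuming the old-k8s-version-542356 kubeconfig context used later in this log:

	# Repeat the label-selector pod list the helper keeps issuing; while the
	# apiserver is down it fails with the same "connection refused" error.
	kubectl --context old-k8s-version-542356 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	# The cert_rotation errors name profiles whose certificates are gone;
	# listing the profiles directory shows what is actually left on disk.
	ls /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/
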
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (240.756462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-542356" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-542356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-542356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.87µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-542356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
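
The assertion expects the dashboard-metrics-scraper deployment to run an image containing registry.k8s.io/echoserver:1.4, but because the describe call timed out the deployment info above is empty. Once the apiserver is reachable again, the image can be read with a jsonpath query; the expression below is an illustrative sketch, not the test's own code:

	# Print the container image(s) of the dashboard-metrics-scraper deployment.
	kubectl --context old-k8s-version-542356 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
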
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (227.259317ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
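
Here the Host field reports Running even though the APIServer field reported Stopped above, so the VM is up while the control plane inside it is not; exit status 2 only signals that some component is down. If a fuller picture is wanted in one call, several fields of the status struct can be rendered in a single Go template (a sketch, assuming the Host, Kubelet, APIServer and Kubeconfig fields exposed by this minikube version):

	# Render several status fields at once instead of one per invocation.
	out/minikube-linux-amd64 status -p old-k8s-version-542356 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
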
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-542356 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-284111 sudo iptables                       | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo docker                         | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo cat                            | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo                                | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo find                           | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-284111 sudo crio                           | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-284111                                     | bridge-284111 | jenkins | v1.35.0 | 27 Jan 25 03:19 UTC | 27 Jan 25 03:19 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 03:17:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:17:58.007832  965412 out.go:345] Setting OutFile to fd 1 ...
	I0127 03:17:58.008087  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008098  965412 out.go:358] Setting ErrFile to fd 2...
	I0127 03:17:58.008102  965412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 03:17:58.008278  965412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 03:17:58.008983  965412 out.go:352] Setting JSON to false
	I0127 03:17:58.010228  965412 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14421,"bootTime":1737933457,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:17:58.010344  965412 start.go:139] virtualization: kvm guest
	I0127 03:17:58.012718  965412 out.go:177] * [bridge-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:17:58.014083  965412 notify.go:220] Checking for updates...
	I0127 03:17:58.014104  965412 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 03:17:58.015451  965412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:17:58.016768  965412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:17:58.017965  965412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.019014  965412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:17:58.020110  965412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 03:17:58.021921  965412 config.go:182] Loaded profile config "default-k8s-diff-port-150897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022085  965412 config.go:182] Loaded profile config "no-preload-844432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:17:58.022217  965412 config.go:182] Loaded profile config "old-k8s-version-542356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 03:17:58.022360  965412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 03:17:58.061018  965412 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 03:17:58.062340  965412 start.go:297] selected driver: kvm2
	I0127 03:17:58.062361  965412 start.go:901] validating driver "kvm2" against <nil>
	I0127 03:17:58.062373  965412 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 03:17:58.063151  965412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.063269  965412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 03:17:58.080150  965412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 03:17:58.080207  965412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 03:17:58.080475  965412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:17:58.080515  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:17:58.080523  965412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 03:17:58.080596  965412 start.go:340] cluster config:
	{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:17:58.080703  965412 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:17:58.082659  965412 out.go:177] * Starting "bridge-284111" primary control-plane node in "bridge-284111" cluster
	I0127 03:17:58.084060  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:17:58.084155  965412 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 03:17:58.084193  965412 cache.go:56] Caching tarball of preloaded images
	I0127 03:17:58.084317  965412 preload.go:172] Found /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 03:17:58.084333  965412 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 03:17:58.084446  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:17:58.084473  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json: {Name:mk925500efef5bfd6040ea4d63f14dacaa6ac946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:58.084633  965412 start.go:360] acquireMachinesLock for bridge-284111: {Name:mke71211974c740845fb9b00041df14f4e3ecd74 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 03:17:58.084676  965412 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "bridge-284111"
	I0127 03:17:58.084703  965412 start.go:93] Provisioning new machine with config: &{Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:17:58.084799  965412 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 03:17:58.086526  965412 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 03:17:58.086710  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:17:58.086766  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:17:58.103582  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0127 03:17:58.104096  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:17:58.104674  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:17:58.104697  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:17:58.105051  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:17:58.105275  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:17:58.105440  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:17:58.105583  965412 start.go:159] libmachine.API.Create for "bridge-284111" (driver="kvm2")
	I0127 03:17:58.105618  965412 client.go:168] LocalClient.Create starting
	I0127 03:17:58.105657  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem
	I0127 03:17:58.105689  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105706  965412 main.go:141] libmachine: Parsing certificate...
	I0127 03:17:58.105761  965412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem
	I0127 03:17:58.105784  965412 main.go:141] libmachine: Decoding PEM data...
	I0127 03:17:58.105804  965412 main.go:141] libmachine: Parsing certificate...
	I0127 03:17:58.105828  965412 main.go:141] libmachine: Running pre-create checks...
	I0127 03:17:58.105836  965412 main.go:141] libmachine: (bridge-284111) Calling .PreCreateCheck
	I0127 03:17:58.106286  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:17:58.106758  965412 main.go:141] libmachine: Creating machine...
	I0127 03:17:58.106773  965412 main.go:141] libmachine: (bridge-284111) Calling .Create
	I0127 03:17:58.106921  965412 main.go:141] libmachine: (bridge-284111) creating KVM machine...
	I0127 03:17:58.106938  965412 main.go:141] libmachine: (bridge-284111) creating network...
	I0127 03:17:58.108340  965412 main.go:141] libmachine: (bridge-284111) DBG | found existing default KVM network
	I0127 03:17:58.109981  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.109804  965435 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:80:59} reservation:<nil>}
	I0127 03:17:58.111324  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.111241  965435 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:c5:54} reservation:<nil>}
	I0127 03:17:58.112864  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.112772  965435 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000386960}
	I0127 03:17:58.112921  965412 main.go:141] libmachine: (bridge-284111) DBG | created network xml: 
	I0127 03:17:58.112965  965412 main.go:141] libmachine: (bridge-284111) DBG | <network>
	I0127 03:17:58.112982  965412 main.go:141] libmachine: (bridge-284111) DBG |   <name>mk-bridge-284111</name>
	I0127 03:17:58.112994  965412 main.go:141] libmachine: (bridge-284111) DBG |   <dns enable='no'/>
	I0127 03:17:58.113003  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113012  965412 main.go:141] libmachine: (bridge-284111) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 03:17:58.113026  965412 main.go:141] libmachine: (bridge-284111) DBG |     <dhcp>
	I0127 03:17:58.113039  965412 main.go:141] libmachine: (bridge-284111) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 03:17:58.113049  965412 main.go:141] libmachine: (bridge-284111) DBG |     </dhcp>
	I0127 03:17:58.113065  965412 main.go:141] libmachine: (bridge-284111) DBG |   </ip>
	I0127 03:17:58.113087  965412 main.go:141] libmachine: (bridge-284111) DBG |   
	I0127 03:17:58.113098  965412 main.go:141] libmachine: (bridge-284111) DBG | </network>
	I0127 03:17:58.113108  965412 main.go:141] libmachine: (bridge-284111) DBG | 
	I0127 03:17:58.118866  965412 main.go:141] libmachine: (bridge-284111) DBG | trying to create private KVM network mk-bridge-284111 192.168.61.0/24...
	I0127 03:17:58.193944  965412 main.go:141] libmachine: (bridge-284111) DBG | private KVM network mk-bridge-284111 192.168.61.0/24 created
	I0127 03:17:58.194004  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.193927  965435 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.194017  965412 main.go:141] libmachine: (bridge-284111) setting up store path in /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.194041  965412 main.go:141] libmachine: (bridge-284111) building disk image from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 03:17:58.194060  965412 main.go:141] libmachine: (bridge-284111) Downloading /home/jenkins/minikube-integration/20316-897624/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 03:17:58.491014  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.490850  965435 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa...
	I0127 03:17:58.742092  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.741934  965435 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk...
	I0127 03:17:58.742129  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing magic tar header
	I0127 03:17:58.742144  965412 main.go:141] libmachine: (bridge-284111) DBG | Writing SSH key tar header
	I0127 03:17:58.742157  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:17:58.742067  965435 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 ...
	I0127 03:17:58.742170  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111
	I0127 03:17:58.742179  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111 (perms=drwx------)
	I0127 03:17:58.742193  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube/machines
	I0127 03:17:58.742211  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 03:17:58.742226  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube/machines (perms=drwxr-xr-x)
	I0127 03:17:58.742240  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20316-897624
	I0127 03:17:58.742254  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624/.minikube (perms=drwxr-xr-x)
	I0127 03:17:58.742267  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 03:17:58.742281  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home/jenkins
	I0127 03:17:58.742293  965412 main.go:141] libmachine: (bridge-284111) DBG | checking permissions on dir: /home
	I0127 03:17:58.742307  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration/20316-897624 (perms=drwxrwxr-x)
	I0127 03:17:58.742319  965412 main.go:141] libmachine: (bridge-284111) DBG | skipping /home - not owner
	I0127 03:17:58.742332  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 03:17:58.742346  965412 main.go:141] libmachine: (bridge-284111) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 03:17:58.742355  965412 main.go:141] libmachine: (bridge-284111) creating domain...
	I0127 03:17:58.743737  965412 main.go:141] libmachine: (bridge-284111) define libvirt domain using xml: 
	I0127 03:17:58.743768  965412 main.go:141] libmachine: (bridge-284111) <domain type='kvm'>
	I0127 03:17:58.743795  965412 main.go:141] libmachine: (bridge-284111)   <name>bridge-284111</name>
	I0127 03:17:58.743805  965412 main.go:141] libmachine: (bridge-284111)   <memory unit='MiB'>3072</memory>
	I0127 03:17:58.743811  965412 main.go:141] libmachine: (bridge-284111)   <vcpu>2</vcpu>
	I0127 03:17:58.743818  965412 main.go:141] libmachine: (bridge-284111)   <features>
	I0127 03:17:58.743824  965412 main.go:141] libmachine: (bridge-284111)     <acpi/>
	I0127 03:17:58.743831  965412 main.go:141] libmachine: (bridge-284111)     <apic/>
	I0127 03:17:58.743836  965412 main.go:141] libmachine: (bridge-284111)     <pae/>
	I0127 03:17:58.743843  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.743860  965412 main.go:141] libmachine: (bridge-284111)   </features>
	I0127 03:17:58.743868  965412 main.go:141] libmachine: (bridge-284111)   <cpu mode='host-passthrough'>
	I0127 03:17:58.743872  965412 main.go:141] libmachine: (bridge-284111)   
	I0127 03:17:58.743877  965412 main.go:141] libmachine: (bridge-284111)   </cpu>
	I0127 03:17:58.743916  965412 main.go:141] libmachine: (bridge-284111)   <os>
	I0127 03:17:58.743943  965412 main.go:141] libmachine: (bridge-284111)     <type>hvm</type>
	I0127 03:17:58.743960  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='cdrom'/>
	I0127 03:17:58.743978  965412 main.go:141] libmachine: (bridge-284111)     <boot dev='hd'/>
	I0127 03:17:58.743991  965412 main.go:141] libmachine: (bridge-284111)     <bootmenu enable='no'/>
	I0127 03:17:58.744000  965412 main.go:141] libmachine: (bridge-284111)   </os>
	I0127 03:17:58.744011  965412 main.go:141] libmachine: (bridge-284111)   <devices>
	I0127 03:17:58.744022  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='cdrom'>
	I0127 03:17:58.744037  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/boot2docker.iso'/>
	I0127 03:17:58.744049  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hdc' bus='scsi'/>
	I0127 03:17:58.744056  965412 main.go:141] libmachine: (bridge-284111)       <readonly/>
	I0127 03:17:58.744068  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744079  965412 main.go:141] libmachine: (bridge-284111)     <disk type='file' device='disk'>
	I0127 03:17:58.744092  965412 main.go:141] libmachine: (bridge-284111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 03:17:58.744106  965412 main.go:141] libmachine: (bridge-284111)       <source file='/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/bridge-284111.rawdisk'/>
	I0127 03:17:58.744119  965412 main.go:141] libmachine: (bridge-284111)       <target dev='hda' bus='virtio'/>
	I0127 03:17:58.744129  965412 main.go:141] libmachine: (bridge-284111)     </disk>
	I0127 03:17:58.744147  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744166  965412 main.go:141] libmachine: (bridge-284111)       <source network='mk-bridge-284111'/>
	I0127 03:17:58.744177  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744181  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744188  965412 main.go:141] libmachine: (bridge-284111)     <interface type='network'>
	I0127 03:17:58.744199  965412 main.go:141] libmachine: (bridge-284111)       <source network='default'/>
	I0127 03:17:58.744209  965412 main.go:141] libmachine: (bridge-284111)       <model type='virtio'/>
	I0127 03:17:58.744220  965412 main.go:141] libmachine: (bridge-284111)     </interface>
	I0127 03:17:58.744237  965412 main.go:141] libmachine: (bridge-284111)     <serial type='pty'>
	I0127 03:17:58.744254  965412 main.go:141] libmachine: (bridge-284111)       <target port='0'/>
	I0127 03:17:58.744267  965412 main.go:141] libmachine: (bridge-284111)     </serial>
	I0127 03:17:58.744277  965412 main.go:141] libmachine: (bridge-284111)     <console type='pty'>
	I0127 03:17:58.744286  965412 main.go:141] libmachine: (bridge-284111)       <target type='serial' port='0'/>
	I0127 03:17:58.744295  965412 main.go:141] libmachine: (bridge-284111)     </console>
	I0127 03:17:58.744304  965412 main.go:141] libmachine: (bridge-284111)     <rng model='virtio'>
	I0127 03:17:58.744320  965412 main.go:141] libmachine: (bridge-284111)       <backend model='random'>/dev/random</backend>
	I0127 03:17:58.744330  965412 main.go:141] libmachine: (bridge-284111)     </rng>
	I0127 03:17:58.744339  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744352  965412 main.go:141] libmachine: (bridge-284111)     
	I0127 03:17:58.744383  965412 main.go:141] libmachine: (bridge-284111)   </devices>
	I0127 03:17:58.744399  965412 main.go:141] libmachine: (bridge-284111) </domain>
	I0127 03:17:58.744433  965412 main.go:141] libmachine: (bridge-284111) 
	I0127 03:17:58.748565  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b5:a5:4c in network default
	I0127 03:17:58.749275  965412 main.go:141] libmachine: (bridge-284111) starting domain...
	I0127 03:17:58.749295  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:17:58.749303  965412 main.go:141] libmachine: (bridge-284111) ensuring networks are active...
	I0127 03:17:58.750055  965412 main.go:141] libmachine: (bridge-284111) Ensuring network default is active
	I0127 03:17:58.750412  965412 main.go:141] libmachine: (bridge-284111) Ensuring network mk-bridge-284111 is active
	I0127 03:17:58.750915  965412 main.go:141] libmachine: (bridge-284111) getting domain XML...
	I0127 03:17:58.751662  965412 main.go:141] libmachine: (bridge-284111) creating domain...
	I0127 03:18:00.015025  965412 main.go:141] libmachine: (bridge-284111) waiting for IP...
	I0127 03:18:00.016519  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.017082  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.017146  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.017069  965435 retry.go:31] will retry after 296.46937ms: waiting for domain to come up
	I0127 03:18:00.315605  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.316275  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.316335  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.316255  965435 retry.go:31] will retry after 324.587633ms: waiting for domain to come up
	I0127 03:18:00.642896  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.643504  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.643533  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.643463  965435 retry.go:31] will retry after 310.207491ms: waiting for domain to come up
	I0127 03:18:00.955258  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:00.955855  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:00.955900  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:00.955817  965435 retry.go:31] will retry after 446.485588ms: waiting for domain to come up
	I0127 03:18:01.403690  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.404190  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.404213  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.404170  965435 retry.go:31] will retry after 582.778524ms: waiting for domain to come up
	I0127 03:18:01.988986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:01.989525  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:01.989575  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:01.989493  965435 retry.go:31] will retry after 794.193078ms: waiting for domain to come up
	I0127 03:18:02.784888  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:02.785367  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:02.785398  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:02.785331  965435 retry.go:31] will retry after 750.185481ms: waiting for domain to come up
	I0127 03:18:03.536841  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:03.537466  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:03.537489  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:03.537438  965435 retry.go:31] will retry after 1.167158008s: waiting for domain to come up
	I0127 03:18:04.706731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:04.707283  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:04.707309  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:04.707258  965435 retry.go:31] will retry after 1.775191002s: waiting for domain to come up
	I0127 03:18:06.485130  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:06.485646  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:06.485667  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:06.485615  965435 retry.go:31] will retry after 1.448139158s: waiting for domain to come up
	I0127 03:18:07.935272  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:07.935916  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:07.935951  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:07.935874  965435 retry.go:31] will retry after 1.937800559s: waiting for domain to come up
	I0127 03:18:09.876527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:09.877179  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:09.877209  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:09.877127  965435 retry.go:31] will retry after 3.510411188s: waiting for domain to come up
	I0127 03:18:13.388796  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:13.389263  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:13.389312  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:13.389227  965435 retry.go:31] will retry after 2.812768495s: waiting for domain to come up
	I0127 03:18:16.203115  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:16.203663  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find current IP address of domain bridge-284111 in network mk-bridge-284111
	I0127 03:18:16.203687  965412 main.go:141] libmachine: (bridge-284111) DBG | I0127 03:18:16.203637  965435 retry.go:31] will retry after 5.220368337s: waiting for domain to come up
	I0127 03:18:21.428631  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429297  965412 main.go:141] libmachine: (bridge-284111) found domain IP: 192.168.61.178
	I0127 03:18:21.429319  965412 main.go:141] libmachine: (bridge-284111) reserving static IP address...
	I0127 03:18:21.429334  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has current primary IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.429752  965412 main.go:141] libmachine: (bridge-284111) DBG | unable to find host DHCP lease matching {name: "bridge-284111", mac: "52:54:00:b1:5c:91", ip: "192.168.61.178"} in network mk-bridge-284111
	I0127 03:18:21.509966  965412 main.go:141] libmachine: (bridge-284111) reserved static IP address 192.168.61.178 for domain bridge-284111
	I0127 03:18:21.509994  965412 main.go:141] libmachine: (bridge-284111) waiting for SSH...
	I0127 03:18:21.510014  965412 main.go:141] libmachine: (bridge-284111) DBG | Getting to WaitForSSH function...
	I0127 03:18:21.512978  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513493  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.513526  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.513707  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH client type: external
	I0127 03:18:21.513738  965412 main.go:141] libmachine: (bridge-284111) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa (-rw-------)
	I0127 03:18:21.513787  965412 main.go:141] libmachine: (bridge-284111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 03:18:21.513808  965412 main.go:141] libmachine: (bridge-284111) DBG | About to run SSH command:
	I0127 03:18:21.513827  965412 main.go:141] libmachine: (bridge-284111) DBG | exit 0
	I0127 03:18:21.644785  965412 main.go:141] libmachine: (bridge-284111) DBG | SSH cmd err, output: <nil>: 
	I0127 03:18:21.645052  965412 main.go:141] libmachine: (bridge-284111) KVM machine creation complete
	I0127 03:18:21.645355  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:21.645965  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646190  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:21.646360  965412 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 03:18:21.646375  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:21.647746  965412 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 03:18:21.647759  965412 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 03:18:21.647764  965412 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 03:18:21.647770  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.650013  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650350  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.650389  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.650556  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.650778  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.650971  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.651160  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.651399  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.651690  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.651705  965412 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 03:18:21.764222  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:21.764246  965412 main.go:141] libmachine: Detecting the provisioner...
	I0127 03:18:21.764254  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.767309  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767688  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.767729  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.767918  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.768152  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768332  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.768482  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.768638  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.768838  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.768853  965412 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 03:18:21.881643  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 03:18:21.881735  965412 main.go:141] libmachine: found compatible host: buildroot
	I0127 03:18:21.881746  965412 main.go:141] libmachine: Provisioning with buildroot...
	I0127 03:18:21.881753  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.881975  965412 buildroot.go:166] provisioning hostname "bridge-284111"
	I0127 03:18:21.881988  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:21.882114  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:21.885113  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885480  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:21.885512  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:21.885630  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:21.885871  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886021  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:21.886238  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:21.886376  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:21.886540  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:21.886551  965412 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-284111 && echo "bridge-284111" | sudo tee /etc/hostname
	I0127 03:18:22.015776  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-284111
	
	I0127 03:18:22.015808  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.018986  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019331  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.019361  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.019548  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.019766  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.019970  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.020119  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.020270  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.020473  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.020500  965412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-284111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-284111/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-284111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:18:22.149637  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:18:22.149671  965412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-897624/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-897624/.minikube}
	I0127 03:18:22.149726  965412 buildroot.go:174] setting up certificates
	I0127 03:18:22.149746  965412 provision.go:84] configureAuth start
	I0127 03:18:22.149765  965412 main.go:141] libmachine: (bridge-284111) Calling .GetMachineName
	I0127 03:18:22.150087  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.153181  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153482  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.153504  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.153707  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.156418  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.156825  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.156858  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.157060  965412 provision.go:143] copyHostCerts
	I0127 03:18:22.157140  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem, removing ...
	I0127 03:18:22.157153  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem
	I0127 03:18:22.157243  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/ca.pem (1078 bytes)
	I0127 03:18:22.157355  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem, removing ...
	I0127 03:18:22.157366  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem
	I0127 03:18:22.157404  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/cert.pem (1123 bytes)
	I0127 03:18:22.157496  965412 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem, removing ...
	I0127 03:18:22.157506  965412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem
	I0127 03:18:22.157546  965412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-897624/.minikube/key.pem (1679 bytes)
	I0127 03:18:22.157616  965412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem org=jenkins.bridge-284111 san=[127.0.0.1 192.168.61.178 bridge-284111 localhost minikube]
	I0127 03:18:22.340623  965412 provision.go:177] copyRemoteCerts
	I0127 03:18:22.340707  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:18:22.340739  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.343784  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344187  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.344219  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.344432  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.344616  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.344750  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.344872  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
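For reference, the ssh client set up above comes down to key-based remote command execution against the guest. A minimal Go sketch, hypothetical and not minikube's actual ssh_runner; it only reuses the address, user and key path shown in the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.61.178:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Run one of the commands seen later in the log and print its output.
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}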
	I0127 03:18:22.435531  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 03:18:22.459380  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:18:22.481955  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 03:18:22.504297  965412 provision.go:87] duration metric: took 354.53072ms to configureAuth
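The "generating server cert" step above (provision.go:117) can be approximated with the standard crypto/x509 package: sign a server certificate with the existing CA key pair, including the SANs listed in the log. A sketch only, assuming the CA key is a PKCS#1-encoded RSA key and with error handling elided for brevity; this is not minikube's provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material as referenced in the log (errors ignored in this sketch).
	caPEM, _ := os.ReadFile(".minikube/certs/ca.pem")
	caKeyPEM, _ := os.ReadFile(".minikube/certs/ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-284111"}},
		// SANs from the log: san=[127.0.0.1 192.168.61.178 bridge-284111 localhost minikube]
		DNSNames:    []string{"bridge-284111", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.178")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}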
	I0127 03:18:22.504340  965412 buildroot.go:189] setting minikube options for container-runtime
	I0127 03:18:22.504542  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:22.504637  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.507527  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.507981  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.508014  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.508272  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.508518  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508696  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.508867  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.509083  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.509321  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.509344  965412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 03:18:22.745255  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 03:18:22.745289  965412 main.go:141] libmachine: Checking connection to Docker...
	I0127 03:18:22.745298  965412 main.go:141] libmachine: (bridge-284111) Calling .GetURL
	I0127 03:18:22.746733  965412 main.go:141] libmachine: (bridge-284111) DBG | using libvirt version 6000000
	I0127 03:18:22.748816  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749210  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.749235  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.749452  965412 main.go:141] libmachine: Docker is up and running!
	I0127 03:18:22.749468  965412 main.go:141] libmachine: Reticulating splines...
	I0127 03:18:22.749477  965412 client.go:171] duration metric: took 24.643847103s to LocalClient.Create
	I0127 03:18:22.749501  965412 start.go:167] duration metric: took 24.643920715s to libmachine.API.Create "bridge-284111"
	I0127 03:18:22.749510  965412 start.go:293] postStartSetup for "bridge-284111" (driver="kvm2")
	I0127 03:18:22.749521  965412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:18:22.749538  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.749766  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:18:22.749791  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.752050  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752455  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.752481  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.752670  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.752875  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.753046  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.753209  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:22.838649  965412 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:18:22.842594  965412 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 03:18:22.842623  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/addons for local assets ...
	I0127 03:18:22.842702  965412 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-897624/.minikube/files for local assets ...
	I0127 03:18:22.842811  965412 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem -> 9048892.pem in /etc/ssl/certs
	I0127 03:18:22.842925  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:18:22.851615  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:22.873576  965412 start.go:296] duration metric: took 124.051614ms for postStartSetup
	I0127 03:18:22.873628  965412 main.go:141] libmachine: (bridge-284111) Calling .GetConfigRaw
	I0127 03:18:22.874263  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.877366  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877690  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.877717  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.877984  965412 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/config.json ...
	I0127 03:18:22.878205  965412 start.go:128] duration metric: took 24.793394051s to createHost
	I0127 03:18:22.878230  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.880656  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881029  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.881057  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.881273  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:22.881451  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881617  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:22.881735  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:22.881878  965412 main.go:141] libmachine: Using SSH client type: native
	I0127 03:18:22.882070  965412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0127 03:18:22.882081  965412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 03:18:22.993428  965412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737947902.961069921
	
	I0127 03:18:22.993452  965412 fix.go:216] guest clock: 1737947902.961069921
	I0127 03:18:22.993459  965412 fix.go:229] Guest: 2025-01-27 03:18:22.961069921 +0000 UTC Remote: 2025-01-27 03:18:22.878219801 +0000 UTC m=+24.911173814 (delta=82.85012ms)
	I0127 03:18:22.993480  965412 fix.go:200] guest clock delta is within tolerance: 82.85012ms
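The guest-clock check above parses the guest's `date +%s.%N` output and accepts a small skew against the host clock. A minimal sketch of that comparison; the 2-second tolerance is an assumption for illustration, not necessarily minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Value parsed from the `date +%s.%N` output shown in the log.
	guest := time.Unix(1737947902, 961069921)
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative threshold
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, time sync needed\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}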
	I0127 03:18:22.993486  965412 start.go:83] releasing machines lock for "bridge-284111", held for 24.908799324s
	I0127 03:18:22.993504  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.993771  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:22.996377  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996721  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:22.996743  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:22.996876  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997362  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997554  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:22.997692  965412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 03:18:22.997726  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:22.997831  965412 ssh_runner.go:195] Run: cat /version.json
	I0127 03:18:22.997879  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:23.000390  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000715  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.000748  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000765  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.000835  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001133  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001212  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:23.001255  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:23.001296  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001383  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:23.001468  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.001516  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:23.001641  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:23.001749  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:23.082154  965412 ssh_runner.go:195] Run: systemctl --version
	I0127 03:18:23.117345  965412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 03:18:23.273868  965412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 03:18:23.280724  965412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 03:18:23.280787  965412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 03:18:23.296482  965412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 03:18:23.296511  965412 start.go:495] detecting cgroup driver to use...
	I0127 03:18:23.296594  965412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 03:18:23.311864  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 03:18:23.326213  965412 docker.go:217] disabling cri-docker service (if available) ...
	I0127 03:18:23.326279  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 03:18:23.340218  965412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 03:18:23.354322  965412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 03:18:23.476775  965412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 03:18:23.639888  965412 docker.go:233] disabling docker service ...
	I0127 03:18:23.639952  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:18:23.654213  965412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:18:23.666393  965412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:18:23.791691  965412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:18:23.913216  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:18:23.928195  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:18:23.946645  965412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 03:18:23.946719  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.956606  965412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 03:18:23.956669  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.966456  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.975900  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:23.985665  965412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 03:18:23.996373  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.005997  965412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 03:18:24.022695  965412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
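The sed invocations above rewrite the CRI-O drop-in in place (pause image, cgroup manager, conmon cgroup, sysctl whitelist). An equivalent local sketch in Go that mirrors the first two regex substitutions on the same file; a hypothetical helper, not what minikube runs:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}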
	I0127 03:18:24.032296  965412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:18:24.041565  965412 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 03:18:24.041627  965412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 03:18:24.054330  965412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:18:24.064064  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:24.182330  965412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 03:18:24.274584  965412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 03:18:24.274671  965412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 03:18:24.279679  965412 start.go:563] Will wait 60s for crictl version
	I0127 03:18:24.279736  965412 ssh_runner.go:195] Run: which crictl
	I0127 03:18:24.283480  965412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 03:18:24.325459  965412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 03:18:24.325556  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.358736  965412 ssh_runner.go:195] Run: crio --version
	I0127 03:18:24.389379  965412 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 03:18:24.390675  965412 main.go:141] libmachine: (bridge-284111) Calling .GetIP
	I0127 03:18:24.393731  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394168  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:24.394201  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:24.394421  965412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 03:18:24.398415  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:24.413708  965412 kubeadm.go:883] updating cluster {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 03:18:24.413840  965412 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 03:18:24.413899  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:24.444435  965412 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 03:18:24.444515  965412 ssh_runner.go:195] Run: which lz4
	I0127 03:18:24.448257  965412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:18:24.451999  965412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 03:18:24.452038  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 03:18:25.746010  965412 crio.go:462] duration metric: took 1.297780518s to copy over tarball
	I0127 03:18:25.746099  965412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:18:28.004354  965412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258210919s)
	I0127 03:18:28.004393  965412 crio.go:469] duration metric: took 2.258349498s to extract the tarball
	I0127 03:18:28.004404  965412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:18:28.043277  965412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:18:28.083196  965412 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 03:18:28.083221  965412 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:18:28.083229  965412 kubeadm.go:934] updating node { 192.168.61.178 8443 v1.32.1 crio true true} ...
	I0127 03:18:28.083347  965412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-284111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 03:18:28.083435  965412 ssh_runner.go:195] Run: crio config
	I0127 03:18:28.136532  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:28.136559  965412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 03:18:28.136582  965412 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-284111 NodeName:bridge-284111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 03:18:28.136722  965412 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-284111"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.178"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
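The kubeadm.yaml above is rendered from the cluster config shown in kubeadm.go:189. A simplified Go text/template sketch that covers only the InitConfiguration stanza; it is a stand-in for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.AdvertiseAddress}}"
  taints: []
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log above.
	err := tmpl.Execute(os.Stdout, struct {
		AdvertiseAddress string
		NodeName         string
		BindPort         int
	}{"192.168.61.178", "bridge-284111", 8443})
	if err != nil {
		panic(err)
	}
}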
	
	I0127 03:18:28.136785  965412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 03:18:28.148059  965412 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:18:28.148148  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:18:28.159212  965412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 03:18:28.177174  965412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:18:28.194607  965412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 03:18:28.212099  965412 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0127 03:18:28.216059  965412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 03:18:28.229417  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:28.371410  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:28.389537  965412 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111 for IP: 192.168.61.178
	I0127 03:18:28.389563  965412 certs.go:194] generating shared ca certs ...
	I0127 03:18:28.389583  965412 certs.go:226] acquiring lock for ca certs: {Name:mk2a0a86c4d196cb3cdcd1c836209954479044e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.389758  965412 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key
	I0127 03:18:28.389807  965412 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key
	I0127 03:18:28.389843  965412 certs.go:256] generating profile certs ...
	I0127 03:18:28.389921  965412 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key
	I0127 03:18:28.389966  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt with IP's: []
	I0127 03:18:28.445000  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt ...
	I0127 03:18:28.445033  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.crt: {Name:mk9e7d9c51cfe9365fde4974dd819fc8a0bc2c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445242  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key ...
	I0127 03:18:28.445257  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/client.key: {Name:mk894eba5407f86f4d0ac29f6591849b258437b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.445372  965412 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd
	I0127 03:18:28.445393  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.178]
	I0127 03:18:28.526577  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd ...
	I0127 03:18:28.526609  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd: {Name:mk6aec7505a30c2d0a25e9e0af381fa28e034b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527301  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd ...
	I0127 03:18:28.527321  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd: {Name:mka5254c805742e5a010001442cf41b9cd6eb55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.527419  965412 certs.go:381] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt
	I0127 03:18:28.527506  965412 certs.go:385] copying /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key.803117fd -> /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key
	I0127 03:18:28.527579  965412 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key
	I0127 03:18:28.527604  965412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt with IP's: []
	I0127 03:18:28.748033  965412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt ...
	I0127 03:18:28.748067  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt: {Name:mk5216cbd26d0be2d45e0038f200d35e4ccd2e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748266  965412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key ...
	I0127 03:18:28.748285  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key: {Name:mk834e366bff2ac05f8e145b0ed8884b9ec0040a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:28.748490  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem (1338 bytes)
	W0127 03:18:28.748541  965412 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889_empty.pem, impossibly tiny 0 bytes
	I0127 03:18:28.748557  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 03:18:28.748588  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:18:28.748617  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:18:28.748649  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/certs/key.pem (1679 bytes)
	I0127 03:18:28.748699  965412 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem (1708 bytes)
	I0127 03:18:28.749391  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:18:28.774598  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 03:18:28.797221  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:18:28.819775  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:18:28.844206  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 03:18:28.868818  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 03:18:28.893782  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:18:28.918276  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/bridge-284111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 03:18:28.942153  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:18:28.964770  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/certs/904889.pem --> /usr/share/ca-certificates/904889.pem (1338 bytes)
	I0127 03:18:28.987187  965412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/ssl/certs/9048892.pem --> /usr/share/ca-certificates/9048892.pem (1708 bytes)
	I0127 03:18:29.011066  965412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:18:29.027191  965412 ssh_runner.go:195] Run: openssl version
	I0127 03:18:29.033146  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:18:29.044813  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049334  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:48 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.049405  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:18:29.055257  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:18:29.068772  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/904889.pem && ln -fs /usr/share/ca-certificates/904889.pem /etc/ssl/certs/904889.pem"
	I0127 03:18:29.083121  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087778  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:57 /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.087846  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/904889.pem
	I0127 03:18:29.095607  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/904889.pem /etc/ssl/certs/51391683.0"
	I0127 03:18:29.108404  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9048892.pem && ln -fs /usr/share/ca-certificates/9048892.pem /etc/ssl/certs/9048892.pem"
	I0127 03:18:29.123881  965412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130048  965412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:57 /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.130122  965412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9048892.pem
	I0127 03:18:29.135495  965412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9048892.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:18:29.146435  965412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 03:18:29.150627  965412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 03:18:29.150696  965412 kubeadm.go:392] StartCluster: {Name:bridge-284111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-284111 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 03:18:29.150795  965412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 03:18:29.150878  965412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:18:29.193528  965412 cri.go:89] found id: ""
	I0127 03:18:29.193616  965412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:18:29.203514  965412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:18:29.213077  965412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:18:29.225040  965412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:18:29.225067  965412 kubeadm.go:157] found existing configuration files:
	
	I0127 03:18:29.225118  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 03:18:29.234175  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 03:18:29.234234  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 03:18:29.243247  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 03:18:29.252478  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 03:18:29.252533  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 03:18:29.262187  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.271490  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 03:18:29.271550  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 03:18:29.281421  965412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 03:18:29.289870  965412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 03:18:29.289944  965412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 03:18:29.298976  965412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 03:18:29.453263  965412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 03:18:39.039753  965412 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 03:18:39.039835  965412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 03:18:39.039931  965412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 03:18:39.040064  965412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 03:18:39.040201  965412 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 03:18:39.040292  965412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 03:18:39.041906  965412 out.go:235]   - Generating certificates and keys ...
	I0127 03:18:39.042004  965412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 03:18:39.042097  965412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 03:18:39.042190  965412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 03:18:39.042251  965412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 03:18:39.042319  965412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 03:18:39.042370  965412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 03:18:39.042423  965412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 03:18:39.042563  965412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042626  965412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 03:18:39.042798  965412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-284111 localhost] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0127 03:18:39.042911  965412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 03:18:39.043006  965412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 03:18:39.043074  965412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 03:18:39.043158  965412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 03:18:39.043267  965412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 03:18:39.043359  965412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 03:18:39.043439  965412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 03:18:39.043526  965412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 03:18:39.043598  965412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 03:18:39.043710  965412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 03:18:39.043807  965412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 03:18:39.045144  965412 out.go:235]   - Booting up control plane ...
	I0127 03:18:39.045244  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 03:18:39.045327  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 03:18:39.045407  965412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 03:18:39.045550  965412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 03:18:39.045646  965412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 03:18:39.045707  965412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 03:18:39.045807  965412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 03:18:39.045898  965412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 03:18:39.045994  965412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.82396ms
	I0127 03:18:39.046096  965412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 03:18:39.046186  965412 kubeadm.go:310] [api-check] The API server is healthy after 5.003089327s
	I0127 03:18:39.046295  965412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 03:18:39.046472  965412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 03:18:39.046560  965412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 03:18:39.046735  965412 kubeadm.go:310] [mark-control-plane] Marking the node bridge-284111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 03:18:39.046819  965412 kubeadm.go:310] [bootstrap-token] Using token: 9vz6c7.t2ey9xa65s2m5rce
	I0127 03:18:39.048225  965412 out.go:235]   - Configuring RBAC rules ...
	I0127 03:18:39.048342  965412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 03:18:39.048430  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 03:18:39.048558  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 03:18:39.048663  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 03:18:39.048758  965412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 03:18:39.048829  965412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 03:18:39.048972  965412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 03:18:39.049013  965412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 03:18:39.049058  965412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 03:18:39.049064  965412 kubeadm.go:310] 
	I0127 03:18:39.049117  965412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 03:18:39.049123  965412 kubeadm.go:310] 
	I0127 03:18:39.049204  965412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 03:18:39.049211  965412 kubeadm.go:310] 
	I0127 03:18:39.049232  965412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 03:18:39.049289  965412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 03:18:39.049374  965412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 03:18:39.049387  965412 kubeadm.go:310] 
	I0127 03:18:39.049462  965412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 03:18:39.049472  965412 kubeadm.go:310] 
	I0127 03:18:39.049547  965412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 03:18:39.049555  965412 kubeadm.go:310] 
	I0127 03:18:39.049628  965412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 03:18:39.049755  965412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 03:18:39.049867  965412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 03:18:39.049877  965412 kubeadm.go:310] 
	I0127 03:18:39.049992  965412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 03:18:39.050101  965412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 03:18:39.050111  965412 kubeadm.go:310] 
	I0127 03:18:39.050182  965412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050284  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 \
	I0127 03:18:39.050318  965412 kubeadm.go:310] 	--control-plane 
	I0127 03:18:39.050325  965412 kubeadm.go:310] 
	I0127 03:18:39.050393  965412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 03:18:39.050399  965412 kubeadm.go:310] 
	I0127 03:18:39.050483  965412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9vz6c7.t2ey9xa65s2m5rce \
	I0127 03:18:39.050641  965412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e03cefec8c3986c8071c28cf6fb7c7bbd45830c579dcdbae8e6ec1676091320 
	I0127 03:18:39.050656  965412 cni.go:84] Creating CNI manager for "bridge"
	I0127 03:18:39.052074  965412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 03:18:39.053180  965412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 03:18:39.065430  965412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
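The two ssh_runner steps above create /etc/cni/net.d and copy a 496-byte bridge conflist onto the node. The actual contents of 1-k8s.conflist are not shown in this log, so the Go sketch below only illustrates the general shape of a bridge CNI config list; every field value here is an assumption, not minikube's file.

	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	func main() {
		// Illustrative bridge CNI config list; these values are assumptions,
		// not the actual contents of minikube's 1-k8s.conflist.
		conflist := map[string]any{
			"cniVersion": "1.0.0",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}

		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Written locally here; on the node the target is /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
			log.Fatal(err)
		}
	}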
	I0127 03:18:39.085517  965412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 03:18:39.085626  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.085655  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-284111 minikube.k8s.io/updated_at=2025_01_27T03_18_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=bridge-284111 minikube.k8s.io/primary=true
	I0127 03:18:39.236877  965412 ops.go:34] apiserver oom_adj: -16
	I0127 03:18:39.239687  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:39.739742  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.240439  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:40.740627  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.240543  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:41.740802  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.239814  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:42.740769  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.239766  965412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 03:18:43.362731  965412 kubeadm.go:1113] duration metric: took 4.27717357s to wait for elevateKubeSystemPrivileges
	I0127 03:18:43.362780  965412 kubeadm.go:394] duration metric: took 14.212089282s to StartCluster
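The repeated `kubectl get sa default` runs above are a readiness poll: the command is retried roughly every 500ms until the default service account exists, at which point the elevateKubeSystemPrivileges wait is considered done. A minimal sketch of that kind of poll, assuming a plain kubectl binary and kubeconfig path rather than minikube's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the deadline passes.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account exists
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		// Illustrative paths only.
		if err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}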
	I0127 03:18:43.362819  965412 settings.go:142] acquiring lock: {Name:mk329e04da29656a59534015ea2d4cccfe5debac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.362902  965412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 03:18:43.364337  965412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-897624/kubeconfig: {Name:mkcd87672341028e989830590061bc5270733978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:18:43.364571  965412 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 03:18:43.364601  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 03:18:43.364623  965412 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 03:18:43.364821  965412 addons.go:69] Setting storage-provisioner=true in profile "bridge-284111"
	I0127 03:18:43.364832  965412 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 03:18:43.364844  965412 addons.go:238] Setting addon storage-provisioner=true in "bridge-284111"
	I0127 03:18:43.364884  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.364893  965412 addons.go:69] Setting default-storageclass=true in profile "bridge-284111"
	I0127 03:18:43.364911  965412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-284111"
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365478  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.365434  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.365586  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.366316  965412 out.go:177] * Verifying Kubernetes components...
	I0127 03:18:43.367578  965412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:18:43.382144  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0127 03:18:43.382166  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0127 03:18:43.382709  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.382710  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.383321  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383343  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383326  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.383448  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.383802  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.384068  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.384497  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.384547  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.388396  965412 addons.go:238] Setting addon default-storageclass=true in "bridge-284111"
	I0127 03:18:43.388448  965412 host.go:66] Checking if "bridge-284111" exists ...
	I0127 03:18:43.388836  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.388888  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.401487  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0127 03:18:43.401963  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.402532  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.402555  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.402948  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.403176  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.405227  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.406011  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0127 03:18:43.406386  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.406864  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.406895  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.407221  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.407649  965412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:18:43.407895  965412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 03:18:43.407952  965412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 03:18:43.409292  965412 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.409316  965412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:18:43.409339  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.413101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.413591  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.413629  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.414006  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.414216  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.414393  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.414580  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
	I0127 03:18:43.427369  965412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0127 03:18:43.427939  965412 main.go:141] libmachine: () Calling .GetVersion
	I0127 03:18:43.429588  965412 main.go:141] libmachine: Using API Version  1
	I0127 03:18:43.429624  965412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 03:18:43.430052  965412 main.go:141] libmachine: () Calling .GetMachineName
	I0127 03:18:43.430287  965412 main.go:141] libmachine: (bridge-284111) Calling .GetState
	I0127 03:18:43.432335  965412 main.go:141] libmachine: (bridge-284111) Calling .DriverName
	I0127 03:18:43.432595  965412 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:43.432622  965412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:18:43.432642  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHHostname
	I0127 03:18:43.436101  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436528  965412 main.go:141] libmachine: (bridge-284111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:5c:91", ip: ""} in network mk-bridge-284111: {Iface:virbr3 ExpiryTime:2025-01-27 04:18:13 +0000 UTC Type:0 Mac:52:54:00:b1:5c:91 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:bridge-284111 Clientid:01:52:54:00:b1:5c:91}
	I0127 03:18:43.436573  965412 main.go:141] libmachine: (bridge-284111) DBG | domain bridge-284111 has defined IP address 192.168.61.178 and MAC address 52:54:00:b1:5c:91 in network mk-bridge-284111
	I0127 03:18:43.436690  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHPort
	I0127 03:18:43.436907  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHKeyPath
	I0127 03:18:43.437126  965412 main.go:141] libmachine: (bridge-284111) Calling .GetSSHUsername
	I0127 03:18:43.437286  965412 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa Username:docker}
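The sshutil lines above open SSH sessions to 192.168.61.178:22 as user docker with the profile's id_rsa key so that the addon manifests can be copied and applied on the node. A hedged sketch of the same idea with golang.org/x/crypto/ssh; the host, port, user, and key path are taken from the log, while everything else (including the remote command) is purely illustrative:

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20316-897624/.minikube/machines/bridge-284111/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", "192.168.61.178:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Illustrative remote command, not the one minikube runs.
		out, err := session.CombinedOutput("sudo mkdir -p /etc/kubernetes/addons")
		if err != nil {
			log.Fatalf("remote command failed: %v (%s)", err, out)
		}
	}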
	I0127 03:18:43.623874  965412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 03:18:43.623927  965412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 03:18:43.650661  965412 node_ready.go:35] waiting up to 15m0s for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667546  965412 node_ready.go:49] node "bridge-284111" has status "Ready":"True"
	I0127 03:18:43.667583  965412 node_ready.go:38] duration metric: took 16.886127ms for node "bridge-284111" to be "Ready" ...
	I0127 03:18:43.667599  965412 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:43.687207  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:43.743454  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:18:43.814389  965412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:18:44.280907  965412 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0127 03:18:44.793593  965412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-284111" context rescaled to 1 replicas
	I0127 03:18:44.833718  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.09022136s)
	I0127 03:18:44.833772  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833809  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.833861  965412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019432049s)
	I0127 03:18:44.833920  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.833938  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834133  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834152  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834178  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834186  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834409  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834427  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834450  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.834446  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834458  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.834464  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.834668  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.834701  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.848046  965412 main.go:141] libmachine: Making call to close driver server
	I0127 03:18:44.848123  965412 main.go:141] libmachine: (bridge-284111) Calling .Close
	I0127 03:18:44.849692  965412 main.go:141] libmachine: (bridge-284111) DBG | Closing plugin on server side
	I0127 03:18:44.849714  965412 main.go:141] libmachine: Successfully made call to close driver server
	I0127 03:18:44.849724  965412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 03:18:44.852448  965412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 03:18:44.853648  965412 addons.go:514] duration metric: took 1.489024932s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 03:18:45.694816  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:46.193044  965412 pod_ready.go:93] pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:46.193071  965412 pod_ready.go:82] duration metric: took 2.505825793s for pod "coredns-668d6bf9bc-gmvqc" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:46.193081  965412 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:48.199298  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:50.699488  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:53.198865  965412 pod_ready.go:103] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status "Ready":"False"
	I0127 03:18:55.199017  965412 pod_ready.go:98] pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.178 HostIPs:[{IP:192.168.61
.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00250df60}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 03:18:55.199049  965412 pod_ready.go:82] duration metric: took 9.005962015s for pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace to be "Ready" ...
	E0127 03:18:55.199068  965412 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-tngtp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 03:18:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.178 HostIPs:[{IP:192.168.61.178}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 03:18:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 03:18:44 +0000 UTC,FinishedAt:2025-01-27 03:18:54 +0000 UTC,ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://60e562f4495ff75f3e610df943b3b20869f7e242b31f7da67bc814acae63373e Started:0xc00208e700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00250df50} {Name:kube-api-access-qcgg5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc00250df60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0127 03:18:55.199080  965412 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203029  965412 pod_ready.go:93] pod "etcd-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.203055  965412 pod_ready.go:82] duration metric: took 3.966832ms for pod "etcd-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.203069  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208264  965412 pod_ready.go:93] pod "kube-apiserver-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.208286  965412 pod_ready.go:82] duration metric: took 5.209412ms for pod "kube-apiserver-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.208296  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215716  965412 pod_ready.go:93] pod "kube-controller-manager-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.215737  965412 pod_ready.go:82] duration metric: took 7.434091ms for pod "kube-controller-manager-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.215747  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220146  965412 pod_ready.go:93] pod "kube-proxy-hrrdg" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.220172  965412 pod_ready.go:82] duration metric: took 4.416975ms for pod "kube-proxy-hrrdg" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.220184  965412 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601116  965412 pod_ready.go:93] pod "kube-scheduler-bridge-284111" in "kube-system" namespace has status "Ready":"True"
	I0127 03:18:55.601153  965412 pod_ready.go:82] duration metric: took 380.959358ms for pod "kube-scheduler-bridge-284111" in "kube-system" namespace to be "Ready" ...
	I0127 03:18:55.601167  965412 pod_ready.go:39] duration metric: took 11.933546372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:18:55.601190  965412 api_server.go:52] waiting for apiserver process to appear ...
	I0127 03:18:55.601249  965412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:18:55.615311  965412 api_server.go:72] duration metric: took 12.250702622s to wait for apiserver process to appear ...
	I0127 03:18:55.615353  965412 api_server.go:88] waiting for apiserver healthz status ...
	I0127 03:18:55.615381  965412 api_server.go:253] Checking apiserver healthz at https://192.168.61.178:8443/healthz ...
	I0127 03:18:55.620633  965412 api_server.go:279] https://192.168.61.178:8443/healthz returned 200:
	ok
	I0127 03:18:55.621585  965412 api_server.go:141] control plane version: v1.32.1
	I0127 03:18:55.621610  965412 api_server.go:131] duration metric: took 6.249694ms to wait for apiserver health ...
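The healthz wait above is a plain HTTPS GET against https://192.168.61.178:8443/healthz that treats a 200 response with body "ok" as healthy. A minimal standalone probe in that spirit; skipping TLS verification is an assumption made here only because this sketch does not load the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by the cluster CA, which this standalone
				// probe does not trust; verification is skipped for illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.61.178:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}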
	I0127 03:18:55.621618  965412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:18:55.799117  965412 system_pods.go:59] 7 kube-system pods found
	I0127 03:18:55.799150  965412 system_pods.go:61] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:55.799155  965412 system_pods.go:61] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:55.799159  965412 system_pods.go:61] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:55.799163  965412 system_pods.go:61] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:55.799166  965412 system_pods.go:61] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:55.799170  965412 system_pods.go:61] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:55.799173  965412 system_pods.go:61] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:55.799180  965412 system_pods.go:74] duration metric: took 177.555316ms to wait for pod list to return data ...
	I0127 03:18:55.799187  965412 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:18:55.996306  965412 default_sa.go:45] found service account: "default"
	I0127 03:18:55.996333  965412 default_sa.go:55] duration metric: took 197.140724ms for default service account to be created ...
	I0127 03:18:55.996343  965412 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 03:18:56.198691  965412 system_pods.go:87] 7 kube-system pods found
	I0127 03:18:56.397259  965412 system_pods.go:105] "coredns-668d6bf9bc-gmvqc" [7dc10376-b832-464e-b10c-89b6155e400a] Running
	I0127 03:18:56.397285  965412 system_pods.go:105] "etcd-bridge-284111" [f8ec6710-5283-4718-a4e5-986b10e7e9e4] Running
	I0127 03:18:56.397291  965412 system_pods.go:105] "kube-apiserver-bridge-284111" [a225e7f8-68a1-4504-8878-cb6ed04545b7] Running
	I0127 03:18:56.397296  965412 system_pods.go:105] "kube-controller-manager-bridge-284111" [a1562f85-9d4e-40bc-b33e-940d1c89fdeb] Running
	I0127 03:18:56.397302  965412 system_pods.go:105] "kube-proxy-hrrdg" [ee95d2f3-c1f4-4d76-a62f-d9e1d344948c] Running
	I0127 03:18:56.397306  965412 system_pods.go:105] "kube-scheduler-bridge-284111" [7a24aa20-3ad9-4968-8f58-512f9bc5d261] Running
	I0127 03:18:56.397310  965412 system_pods.go:105] "storage-provisioner" [bc4e4b69-bac8-4bab-a965-ac49ae78efe4] Running
	I0127 03:18:56.397318  965412 system_pods.go:147] duration metric: took 400.968435ms to wait for k8s-apps to be running ...
	I0127 03:18:56.397325  965412 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 03:18:56.397373  965412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:18:56.413149  965412 system_svc.go:56] duration metric: took 15.80669ms WaitForService to wait for kubelet
	I0127 03:18:56.413188  965412 kubeadm.go:582] duration metric: took 13.048583267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:18:56.413230  965412 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:18:56.596472  965412 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 03:18:56.596506  965412 node_conditions.go:123] node cpu capacity is 2
	I0127 03:18:56.596519  965412 node_conditions.go:105] duration metric: took 183.283498ms to run NodePressure ...
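The NodePressure step reads the node's capacity and conditions back from the API server (ephemeral storage 17734596Ki and 2 CPUs in this run). A rough client-go sketch of the same read; the kubeconfig path and node name come from this log, and the code is an illustration rather than minikube's own implementation:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path and node name taken from the log above.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-897624/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "bridge-284111", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
		for _, c := range node.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should report "False" on a healthy node.
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}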
	I0127 03:18:56.596532  965412 start.go:241] waiting for startup goroutines ...
	I0127 03:18:56.596538  965412 start.go:246] waiting for cluster config update ...
	I0127 03:18:56.596548  965412 start.go:255] writing updated cluster config ...
	I0127 03:18:56.596809  965412 ssh_runner.go:195] Run: rm -f paused
	I0127 03:18:56.647143  965412 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 03:18:56.649661  965412 out.go:177] * Done! kubectl is now configured to use "bridge-284111" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.414193629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948201414171174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f1a3479-e52e-41b5-95f7-8a8406acb9aa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.414820637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90186831-d46c-40d9-9613-36eeee700f9f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.414871336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90186831-d46c-40d9-9613-36eeee700f9f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.414906803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=90186831-d46c-40d9-9613-36eeee700f9f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.446153763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02be5ceb-173e-4d73-a871-da1eb220bd74 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.446240834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02be5ceb-173e-4d73-a871-da1eb220bd74 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.447442173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73500216-d7ee-4887-ae1f-c9fba49e1466 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.447921644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948201447887013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73500216-d7ee-4887-ae1f-c9fba49e1466 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.448448248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a3a5ba6-abee-48da-9b57-7d0d3964b752 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.448494979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a3a5ba6-abee-48da-9b57-7d0d3964b752 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.448536534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a3a5ba6-abee-48da-9b57-7d0d3964b752 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.483028334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9baa7ff7-63b4-4a85-ba71-135fffbd7768 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.483122983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9baa7ff7-63b4-4a85-ba71-135fffbd7768 name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.484170327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09de23d3-cd9a-4544-bd8a-cd1ff50350ca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.484750085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948201484701085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09de23d3-cd9a-4544-bd8a-cd1ff50350ca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.485392386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=865f8398-4f54-4b88-9281-52d145d73967 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.485459294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=865f8398-4f54-4b88-9281-52d145d73967 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.485501125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=865f8398-4f54-4b88-9281-52d145d73967 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.523014759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56d6989c-cc43-4d9b-987d-cca35b2b1dbf name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.523113253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56d6989c-cc43-4d9b-987d-cca35b2b1dbf name=/runtime.v1.RuntimeService/Version
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.524428321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90153f15-8345-4f07-90ae-dcd3dccff082 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.524855290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737948201524825028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90153f15-8345-4f07-90ae-dcd3dccff082 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.525431974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f79bbcad-044b-45bc-b3dc-cb7632d572a8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.525529225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f79bbcad-044b-45bc-b3dc-cb7632d572a8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 03:23:21 old-k8s-version-542356 crio[627]: time="2025-01-27 03:23:21.525620048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f79bbcad-044b-45bc-b3dc-cb7632d572a8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 03:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.073990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603879] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.822966] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.059849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073801] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.176250] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.120814] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.231774] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.304285] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.064590] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.124064] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.435199] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 03:04] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Jan27 03:06] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.074098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:23:21 up 23 min,  0 users,  load average: 0.04, 0.02, 0.00
	Linux old-k8s-version-542356 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc0001ef980, 0xc0009a81c0, 0xc000cce900, 0xc0009bb1b0, 0xc0003e4a38, 0xc0009bb1c0, 0xc000cc8d20)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: goroutine 156 [select]:
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc0001efe00, 0x48ab5d6, 0x3, 0xc0009ef530, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc0001efe00, 0x48ab5d6, 0x3, 0xc0009ef530, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001efe00, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009ef530, 0x24, 0x0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net.(*Dialer).DialContext(0xc000c65a40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009ef530, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c6dca0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009ef530, 0x24, 0x60, 0x7ff7ed30a1e0, 0x118, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net/http.(*Transport).dial(0xc000994dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009ef530, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net/http.(*Transport).dialConn(0xc000994dc0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000474540, 0x5, 0xc0009ef530, 0x24, 0x0, 0xc0004710e0, ...)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: net/http.(*Transport).dialConnFor(0xc000994dc0, 0xc0008bdd90)
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]: created by net/http.(*Transport).queueForDial
	Jan 27 03:23:21 old-k8s-version-542356 kubelet[7152]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jan 27 03:23:21 old-k8s-version-542356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 03:23:21 old-k8s-version-542356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 2 (253.31753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-542356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354.95s)

                                                
                                    

Test pass (261/312)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 12.07
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.15
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 55.52
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.48
31 TestAddons/serial/GCPAuth/Namespaces 2.53
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 19.47
37 TestAddons/parallel/InspektorGadget 11.76
38 TestAddons/parallel/MetricsServer 7.18
40 TestAddons/parallel/CSI 64.59
41 TestAddons/parallel/Headlamp 21.19
42 TestAddons/parallel/CloudSpanner 6.57
43 TestAddons/parallel/LocalPath 57.53
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 12.57
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 80.23
49 TestCertExpiration 367.24
51 TestForceSystemdFlag 88.17
52 TestForceSystemdEnv 67.71
54 TestKVMDriverInstallOrUpdate 4.15
58 TestErrorSpam/setup 42.96
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.6
62 TestErrorSpam/unpause 1.59
63 TestErrorSpam/stop 4.86
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.04
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 46.58
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.89
75 TestFunctional/serial/CacheCmd/cache/add_local 2.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 34.5
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.35
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 20.51
91 TestFunctional/parallel/DryRun 0.35
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 9.02
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 47.46
101 TestFunctional/parallel/SSHCmd 0.49
102 TestFunctional/parallel/CpCmd 1.48
103 TestFunctional/parallel/MySQL 30.95
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
113 TestFunctional/parallel/License 0.56
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/MountCmd/any-port 11.85
117 TestFunctional/parallel/ProfileCmd/profile_list 0.44
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
119 TestFunctional/parallel/ServiceCmd/List 0.51
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
122 TestFunctional/parallel/MountCmd/specific-port 1.77
123 TestFunctional/parallel/ServiceCmd/Format 0.36
124 TestFunctional/parallel/ServiceCmd/URL 0.38
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.36
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 0.76
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.39
145 TestFunctional/parallel/ImageCommands/Setup 4.41
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.38
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.05
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.4
150 TestFunctional/parallel/ImageCommands/ImageRemove 3
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.47
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.88
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 196.19
160 TestMultiControlPlane/serial/DeployApp 6.59
161 TestMultiControlPlane/serial/PingHostFromPods 1.23
162 TestMultiControlPlane/serial/AddWorkerNode 67.83
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
165 TestMultiControlPlane/serial/CopyFile 13.31
166 TestMultiControlPlane/serial/StopSecondaryNode 91.61
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
168 TestMultiControlPlane/serial/RestartSecondaryNode 52.84
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 431.51
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.2
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
173 TestMultiControlPlane/serial/StopCluster 272.94
174 TestMultiControlPlane/serial/RestartCluster 112.01
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
176 TestMultiControlPlane/serial/AddSecondaryNode 76.69
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
181 TestJSONOutput/start/Command 76.16
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.69
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.36
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 85.3
213 TestMountStart/serial/StartWithMountFirst 24.78
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 28.62
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.71
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 23.61
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 137.33
225 TestMultiNode/serial/DeployApp2Nodes 5.5
226 TestMultiNode/serial/PingHostFrom2Pods 0.79
227 TestMultiNode/serial/AddNode 50.49
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.6
230 TestMultiNode/serial/CopyFile 7.38
231 TestMultiNode/serial/StopNode 2.29
232 TestMultiNode/serial/StartAfterStop 38.98
233 TestMultiNode/serial/RestartKeepsNodes 327.06
234 TestMultiNode/serial/DeleteNode 2.64
235 TestMultiNode/serial/StopMultiNode 182.08
236 TestMultiNode/serial/RestartMultiNode 98.43
237 TestMultiNode/serial/ValidateNameConflict 40.79
244 TestScheduledStopUnix 115.31
248 TestRunningBinaryUpgrade 202.27
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
262 TestPause/serial/Start 106.34
263 TestNoKubernetes/serial/StartWithK8s 93.29
264 TestStoppedBinaryUpgrade/Setup 2.36
265 TestStoppedBinaryUpgrade/Upgrade 149.29
266 TestNoKubernetes/serial/StartWithStopK8s 37.91
268 TestNoKubernetes/serial/Start 36.66
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
270 TestNoKubernetes/serial/ProfileList 29.5
271 TestNoKubernetes/serial/Stop 1.41
272 TestNoKubernetes/serial/StartNoArgs 32.89
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
281 TestNetworkPlugins/group/false 3.94
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStartStop/group/no-preload/serial/FirstStart 77.72
290 TestStartStop/group/no-preload/serial/DeployApp 10.27
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
292 TestStartStop/group/no-preload/serial/Stop 91.02
293 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
296 TestStartStop/group/embed-certs/serial/FirstStart 86.73
299 TestStartStop/group/embed-certs/serial/DeployApp 11.28
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
301 TestStartStop/group/embed-certs/serial/Stop 91.02
302 TestStartStop/group/old-k8s-version/serial/Stop 2.29
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/embed-certs/serial/SecondStart 301.98
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.21
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.05
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
317 TestStartStop/group/embed-certs/serial/Pause 2.69
319 TestStartStop/group/newest-cni/serial/FirstStart 44.09
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
322 TestStartStop/group/newest-cni/serial/Stop 7.32
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
324 TestStartStop/group/newest-cni/serial/SecondStart 36.79
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
328 TestStartStop/group/newest-cni/serial/Pause 2.62
329 TestNetworkPlugins/group/auto/Start 56.86
331 TestNetworkPlugins/group/auto/KubeletFlags 0.21
332 TestNetworkPlugins/group/auto/NetCatPod 11.22
333 TestNetworkPlugins/group/auto/DNS 16.08
334 TestNetworkPlugins/group/auto/Localhost 0.13
335 TestNetworkPlugins/group/auto/HairPin 0.11
336 TestNetworkPlugins/group/kindnet/Start 64.11
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
339 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
340 TestNetworkPlugins/group/kindnet/DNS 0.15
341 TestNetworkPlugins/group/kindnet/Localhost 0.15
342 TestNetworkPlugins/group/kindnet/HairPin 0.12
343 TestNetworkPlugins/group/calico/Start 79
344 TestNetworkPlugins/group/calico/ControllerPod 6.01
345 TestNetworkPlugins/group/calico/KubeletFlags 0.25
346 TestNetworkPlugins/group/calico/NetCatPod 11.23
347 TestNetworkPlugins/group/calico/DNS 0.15
348 TestNetworkPlugins/group/calico/Localhost 0.11
349 TestNetworkPlugins/group/calico/HairPin 0.14
350 TestNetworkPlugins/group/custom-flannel/Start 71.85
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
353 TestNetworkPlugins/group/custom-flannel/DNS 0.13
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
356 TestNetworkPlugins/group/enable-default-cni/Start 60.69
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
362 TestNetworkPlugins/group/flannel/Start 75.55
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
366 TestNetworkPlugins/group/flannel/NetCatPod 11.22
367 TestNetworkPlugins/group/flannel/DNS 0.18
368 TestNetworkPlugins/group/flannel/Localhost 0.13
369 TestNetworkPlugins/group/flannel/HairPin 0.12
370 TestNetworkPlugins/group/bridge/Start 58.71
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
372 TestNetworkPlugins/group/bridge/NetCatPod 9.22
373 TestNetworkPlugins/group/bridge/DNS 0.14
374 TestNetworkPlugins/group/bridge/Localhost 0.12
375 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (23.77s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-342001 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-342001 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.770462778s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.77s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 01:47:56.291194  904889 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 01:47:56.291286  904889 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-342001
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-342001: exit status 85 (67.440207ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-342001 | jenkins | v1.35.0 | 27 Jan 25 01:47 UTC |          |
	|         | -p download-only-342001        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 01:47:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 01:47:32.565765  904901 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:47:32.565876  904901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:47:32.565884  904901 out.go:358] Setting ErrFile to fd 2...
	I0127 01:47:32.565889  904901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:47:32.566052  904901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	W0127 01:47:32.566173  904901 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20316-897624/.minikube/config/config.json: open /home/jenkins/minikube-integration/20316-897624/.minikube/config/config.json: no such file or directory
	I0127 01:47:32.566735  904901 out.go:352] Setting JSON to true
	I0127 01:47:32.567746  904901 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8996,"bootTime":1737933457,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:47:32.567875  904901 start.go:139] virtualization: kvm guest
	I0127 01:47:32.570303  904901 out.go:97] [download-only-342001] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 01:47:32.570440  904901 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 01:47:32.570485  904901 notify.go:220] Checking for updates...
	I0127 01:47:32.571761  904901 out.go:169] MINIKUBE_LOCATION=20316
	I0127 01:47:32.573253  904901 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:47:32.574638  904901 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:47:32.575731  904901 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:47:32.577078  904901 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 01:47:32.579163  904901 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 01:47:32.579483  904901 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:47:32.612168  904901 out.go:97] Using the kvm2 driver based on user configuration
	I0127 01:47:32.612197  904901 start.go:297] selected driver: kvm2
	I0127 01:47:32.612205  904901 start.go:901] validating driver "kvm2" against <nil>
	I0127 01:47:32.612560  904901 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:47:32.612659  904901 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 01:47:32.628255  904901 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 01:47:32.628341  904901 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 01:47:32.628997  904901 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 01:47:32.629165  904901 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 01:47:32.629202  904901 cni.go:84] Creating CNI manager for ""
	I0127 01:47:32.629273  904901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 01:47:32.629285  904901 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 01:47:32.629363  904901 start.go:340] cluster config:
	{Name:download-only-342001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-342001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:47:32.629565  904901 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:47:32.631337  904901 out.go:97] Downloading VM boot image ...
	I0127 01:47:32.631372  904901 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 01:47:42.038395  904901 out.go:97] Starting "download-only-342001" primary control-plane node in "download-only-342001" cluster
	I0127 01:47:42.038430  904901 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 01:47:42.134453  904901 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 01:47:42.134499  904901 cache.go:56] Caching tarball of preloaded images
	I0127 01:47:42.134726  904901 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 01:47:42.136802  904901 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 01:47:42.136842  904901 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 01:47:42.238035  904901 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-342001 host does not exist
	  To start a cluster, run: "minikube start -p download-only-342001"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-342001
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.1/json-events (12.07s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-930762 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-930762 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.072716706s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (12.07s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 01:48:08.720078  904889 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 01:48:08.720121  904889 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-930762
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-930762: exit status 85 (65.770068ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-342001 | jenkins | v1.35.0 | 27 Jan 25 01:47 UTC |                     |
	|         | -p download-only-342001        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 01:47 UTC | 27 Jan 25 01:47 UTC |
	| delete  | -p download-only-342001        | download-only-342001 | jenkins | v1.35.0 | 27 Jan 25 01:47 UTC | 27 Jan 25 01:47 UTC |
	| start   | -o=json --download-only        | download-only-930762 | jenkins | v1.35.0 | 27 Jan 25 01:47 UTC |                     |
	|         | -p download-only-930762        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 01:47:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 01:47:56.690435  905159 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:47:56.690546  905159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:47:56.690557  905159 out.go:358] Setting ErrFile to fd 2...
	I0127 01:47:56.690563  905159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:47:56.690775  905159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 01:47:56.691398  905159 out.go:352] Setting JSON to true
	I0127 01:47:56.692399  905159 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9020,"bootTime":1737933457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:47:56.692519  905159 start.go:139] virtualization: kvm guest
	I0127 01:47:56.694493  905159 out.go:97] [download-only-930762] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:47:56.694656  905159 notify.go:220] Checking for updates...
	I0127 01:47:56.696066  905159 out.go:169] MINIKUBE_LOCATION=20316
	I0127 01:47:56.697424  905159 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:47:56.698603  905159 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:47:56.699731  905159 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:47:56.700889  905159 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 01:47:56.703074  905159 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 01:47:56.703374  905159 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:47:56.735604  905159 out.go:97] Using the kvm2 driver based on user configuration
	I0127 01:47:56.735637  905159 start.go:297] selected driver: kvm2
	I0127 01:47:56.735643  905159 start.go:901] validating driver "kvm2" against <nil>
	I0127 01:47:56.736001  905159 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:47:56.736092  905159 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-897624/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 01:47:56.751964  905159 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 01:47:56.752016  905159 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 01:47:56.752600  905159 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 01:47:56.752780  905159 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 01:47:56.752812  905159 cni.go:84] Creating CNI manager for ""
	I0127 01:47:56.752877  905159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 01:47:56.752890  905159 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 01:47:56.752993  905159 start.go:340] cluster config:
	{Name:download-only-930762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-930762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:47:56.753118  905159 iso.go:125] acquiring lock: {Name:mkbbe08fa3daa2372045069667a0a52e0e34abd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 01:47:56.754856  905159 out.go:97] Starting "download-only-930762" primary control-plane node in "download-only-930762" cluster
	I0127 01:47:56.754880  905159 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 01:47:57.271625  905159 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 01:47:57.271669  905159 cache.go:56] Caching tarball of preloaded images
	I0127 01:47:57.271869  905159 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 01:47:57.273798  905159 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 01:47:57.273849  905159 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 01:47:57.373243  905159 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20316-897624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-930762 host does not exist
	  To start a cluster, run: "minikube start -p download-only-930762"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-930762
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I0127 01:48:09.345085  904889 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-522891 --alsologtostderr --binary-mirror http://127.0.0.1:41687 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-522891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-522891
--- PASS: TestBinaryMirror (0.63s)

TestOffline (55.52s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-922784 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-922784 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (54.451595878s)
helpers_test.go:175: Cleaning up "offline-crio-922784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-922784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-922784: (1.070042792s)
--- PASS: TestOffline (55.52s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903003
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-903003: exit status 85 (56.839407ms)

-- stdout --
	* Profile "addons-903003" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903003"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903003
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-903003: exit status 85 (57.712139ms)

-- stdout --
	* Profile "addons-903003" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903003"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (200.48s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-903003 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-903003 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.483161619s)
--- PASS: TestAddons/Setup (200.48s)

TestAddons/serial/GCPAuth/Namespaces (2.53s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-903003 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-903003 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-903003 get secret gcp-auth -n new-namespace: exit status 1 (79.67984ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-903003 logs -l app=gcp-auth -n gcp-auth
I0127 01:51:31.041565  904889 retry.go:31] will retry after 2.25740975s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/01/27 01:51:30 GCP Auth Webhook started!
	2025/01/27 01:51:30 Ready to marshal response ...
	2025/01/27 01:51:30 Ready to write response ...

-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-903003 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.53s)

TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-903003 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-903003 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1212f3bd-35e3-41c2-9a82-bcfd56ffc644] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1212f3bd-35e3-41c2-9a82-bcfd56ffc644] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.002885454s
addons_test.go:633: (dbg) Run:  kubectl --context addons-903003 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-903003 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-903003 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

TestAddons/parallel/Registry (19.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.859668ms
I0127 01:51:52.318282  904889 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 01:51:52.318309  904889 kapi.go:107] duration metric: took 6.36936ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-nqsd9" [9f2c82f7-c7e4-40be-ace7-48ae48867e71] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003900875s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wg8ff" [aa6c2e8e-0eae-47b5-b60e-a503a7c6de28] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003858566s
addons_test.go:331: (dbg) Run:  kubectl --context addons-903003 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-903003 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-903003 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.649677784s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 ip
2025/01/27 01:52:10 [DEBUG] GET http://192.168.39.61:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable registry --alsologtostderr -v=1: (1.639895885s)
--- PASS: TestAddons/parallel/Registry (19.47s)

TestAddons/parallel/InspektorGadget (11.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q4hwz" [e7cdd4a5-6b44-4cef-87e2-d76dce51d6c4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004545779s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable inspektor-gadget --alsologtostderr -v=1: (5.754315159s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

TestAddons/parallel/MetricsServer (7.18s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.536172ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-p5dvw" [62418a49-5783-4e0b-9352-6dbf4a067aac] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004260781s
addons_test.go:402: (dbg) Run:  kubectl --context addons-903003 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable metrics-server --alsologtostderr -v=1: (1.077273494s)
--- PASS: TestAddons/parallel/MetricsServer (7.18s)

TestAddons/parallel/CSI (64.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0127 01:51:52.311956  904889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.379271ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-903003 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-903003 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b7c05388-df1b-4d12-8df9-df9a081fc246] Pending
helpers_test.go:344: "task-pv-pod" [b7c05388-df1b-4d12-8df9-df9a081fc246] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b7c05388-df1b-4d12-8df9-df9a081fc246] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.018691435s
addons_test.go:511: (dbg) Run:  kubectl --context addons-903003 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903003 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903003 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-903003 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-903003 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-903003 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-903003 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7b2cb23c-c4c6-4c1f-9802-50368d92e77d] Pending
helpers_test.go:344: "task-pv-pod-restore" [7b2cb23c-c4c6-4c1f-9802-50368d92e77d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7b2cb23c-c4c6-4c1f-9802-50368d92e77d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006019007s
addons_test.go:553: (dbg) Run:  kubectl --context addons-903003 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-903003 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-903003 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable volumesnapshots --alsologtostderr -v=1: (1.06726587s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.786041018s)
--- PASS: TestAddons/parallel/CSI (64.59s)

TestAddons/parallel/Headlamp (21.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-903003 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-4ltm4" [fecb946e-2ad2-469e-bb9a-2344c4986b98] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-4ltm4" [fecb946e-2ad2-469e-bb9a-2344c4986b98] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003389789s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable headlamp --alsologtostderr -v=1: (6.264227644s)
--- PASS: TestAddons/parallel/Headlamp (21.19s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-2whlp" [0b771401-9cb3-4991-bee4-40008041e3e5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.020665424s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
TestAddons/parallel/LocalPath (57.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-903003 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-903003 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [01d9ba67-e6d6-4390-b0e4-6171ea0ae7bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [01d9ba67-e6d6-4390-b0e4-6171ea0ae7bd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [01d9ba67-e6d6-4390-b0e4-6171ea0ae7bd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0053947s
addons_test.go:906: (dbg) Run:  kubectl --context addons-903003 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 ssh "cat /opt/local-path-provisioner/pvc-04e58c29-5f8a-434e-a75a-c12322d29d11_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-903003 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-903003 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.641752087s)
--- PASS: TestAddons/parallel/LocalPath (57.53s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lw57c" [a69ac6c0-f8c9-4eb0-9fb3-35c983c843b7] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003974281s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (12.57s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-xt5t7" [e66cebea-8597-4c04-902b-9abb2bf15e95] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00432416s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-903003 addons disable yakd --alsologtostderr -v=1: (6.55947972s)
--- PASS: TestAddons/parallel/Yakd (12.57s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-903003
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-903003: (1m30.966330868s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903003
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903003
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-903003
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (80.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-919407 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-919407 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m18.757638932s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-919407 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-919407 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-919407 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-919407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-919407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-919407: (1.005529262s)
--- PASS: TestCertOptions (80.23s)

                                                
                                    
TestCertExpiration (367.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591242 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591242 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m8.069015349s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591242 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591242 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m58.163054211s)
helpers_test.go:175: Cleaning up "cert-expiration-591242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-591242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-591242: (1.008078004s)
--- PASS: TestCertExpiration (367.24s)

                                                
                                    
TestForceSystemdFlag (88.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-409157 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-409157 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m27.136915089s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-409157 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-409157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-409157
--- PASS: TestForceSystemdFlag (88.17s)

                                                
                                    
TestForceSystemdEnv (67.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-096099 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-096099 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.69051945s)
helpers_test.go:175: Cleaning up "force-systemd-env-096099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-096099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-096099: (1.021017204s)
--- PASS: TestForceSystemdEnv (67.71s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.15s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0127 02:51:41.757308  904889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:51:41.757473  904889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 02:51:41.793714  904889 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 02:51:41.794154  904889 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 02:51:41.794230  904889 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate135477113/001/docker-machine-driver-kvm2
I0127 02:51:41.996318  904889 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate135477113/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000785e80 gz:0xc000785e88 tar:0xc000785dc0 tar.bz2:0xc000785dd0 tar.gz:0xc000785de0 tar.xz:0xc000785e30 tar.zst:0xc000785e40 tbz2:0xc000785dd0 tgz:0xc000785de0 txz:0xc000785e30 tzst:0xc000785e40 xz:0xc000785ea0 zip:0xc000785eb0 zst:0xc000785ea8] Getters:map[file:0xc0007769d0 http:0xc000872d20 https:0xc000872e10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 02:51:41.996384  904889 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate135477113/001/docker-machine-driver-kvm2
I0127 02:51:44.202266  904889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:51:44.202375  904889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 02:51:44.230474  904889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 02:51:44.230506  904889 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 02:51:44.230576  904889 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 02:51:44.230604  904889 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate135477113/002/docker-machine-driver-kvm2
I0127 02:51:44.276374  904889 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate135477113/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000785e80 gz:0xc000785e88 tar:0xc000785dc0 tar.bz2:0xc000785dd0 tar.gz:0xc000785de0 tar.xz:0xc000785e30 tar.zst:0xc000785e40 tbz2:0xc000785dd0 tgz:0xc000785de0 txz:0xc000785e30 tzst:0xc000785e40 xz:0xc000785ea0 zip:0xc000785eb0 zst:0xc000785ea8] Getters:map[file:0xc001a33e10 http:0xc00079af50 https:0xc00079afa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 02:51:44.276433  904889 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate135477113/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.15s)

                                                
                                    
TestErrorSpam/setup (42.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-759680 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759680 --driver=kvm2  --container-runtime=crio
E0127 01:56:33.571187  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.577598  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.588971  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.610441  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.651932  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.733448  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:33.895045  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:34.216824  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:34.858950  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:36.140383  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:38.703391  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:43.825339  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 01:56:54.067743  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-759680 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759680 --driver=kvm2  --container-runtime=crio: (42.95600148s)
--- PASS: TestErrorSpam/setup (42.96s)

                                                
                                    
TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 unpause
E0127 01:57:14.549878  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

                                                
                                    
TestErrorSpam/stop (4.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop: (2.337900411s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop: (1.436951143s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-759680 --log_dir /tmp/nospam-759680 stop: (1.086568783s)
--- PASS: TestErrorSpam/stop (4.86s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20316-897624/.minikube/files/etc/test/nested/copy/904889/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0127 01:57:55.513147  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-308251 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (51.04032331s)
--- PASS: TestFunctional/serial/StartWithProxy (51.04s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.58s)

=== RUN   TestFunctional/serial/SoftStart
I0127 01:58:11.713706  904889 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-308251 --alsologtostderr -v=8: (46.582035715s)
functional_test.go:663: soft start took 46.582888701s for "functional-308251" cluster.
I0127 01:58:58.296107  904889 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (46.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-308251 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:3.1: (1.303742313s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:3.3: (1.30956584s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 cache add registry.k8s.io/pause:latest: (1.277464073s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-308251 /tmp/TestFunctionalserialCacheCmdcacheadd_local2371288854/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache add minikube-local-cache-test:functional-308251
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 cache add minikube-local-cache-test:functional-308251: (1.797221823s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache delete minikube-local-cache-test:functional-308251
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-308251
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.71119ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 cache reload: (1.06135733s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 kubectl -- --context functional-308251 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-308251 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 01:59:17.437055  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-308251 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.495011772s)
functional_test.go:761: restart took 34.495143522s for "functional-308251" cluster.
I0127 01:59:41.337675  904889 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (34.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-308251 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 logs: (1.322514791s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 logs --file /tmp/TestFunctionalserialLogsFileCmd3100231385/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 logs --file /tmp/TestFunctionalserialLogsFileCmd3100231385/001/logs.txt: (1.344074021s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-308251 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-308251
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-308251: exit status 115 (283.806493ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.88:30105 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-308251 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 config get cpus: exit status 14 (69.210161ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 config get cpus: exit status 14 (54.700393ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-308251 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-308251 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 912424: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-308251 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (178.776822ms)

                                                
                                                
-- stdout --
	* [functional-308251] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 01:59:50.077175  912241 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:59:50.079996  912241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:59:50.080058  912241 out.go:358] Setting ErrFile to fd 2...
	I0127 01:59:50.080075  912241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:59:50.080443  912241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 01:59:50.081308  912241 out.go:352] Setting JSON to false
	I0127 01:59:50.082945  912241 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9733,"bootTime":1737933457,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:59:50.083141  912241 start.go:139] virtualization: kvm guest
	I0127 01:59:50.084680  912241 out.go:177] * [functional-308251] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 01:59:50.086392  912241 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 01:59:50.086530  912241 notify.go:220] Checking for updates...
	I0127 01:59:50.088614  912241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:59:50.091427  912241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:59:50.093254  912241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:59:50.094382  912241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 01:59:50.095579  912241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 01:59:50.097401  912241 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 01:59:50.098053  912241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:59:50.098151  912241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:59:50.116506  912241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0127 01:59:50.117242  912241 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:59:50.117891  912241 main.go:141] libmachine: Using API Version  1
	I0127 01:59:50.117924  912241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:59:50.118361  912241 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:59:50.118568  912241 main.go:141] libmachine: (functional-308251) Calling .DriverName
	I0127 01:59:50.118904  912241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:59:50.119356  912241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:59:50.119395  912241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:59:50.134873  912241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I0127 01:59:50.135323  912241 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:59:50.135862  912241 main.go:141] libmachine: Using API Version  1
	I0127 01:59:50.135900  912241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:59:50.136233  912241 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:59:50.136528  912241 main.go:141] libmachine: (functional-308251) Calling .DriverName
	I0127 01:59:50.172226  912241 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 01:59:50.173514  912241 start.go:297] selected driver: kvm2
	I0127 01:59:50.173529  912241 start.go:901] validating driver "kvm2" against &{Name:functional-308251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-308251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:59:50.173628  912241 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 01:59:50.175531  912241 out.go:201] 
	W0127 01:59:50.176939  912241 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 01:59:50.178094  912241 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-308251 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-308251 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.689938ms)

                                                
                                                
-- stdout --
	* [functional-308251] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 01:59:49.894303  912194 out.go:345] Setting OutFile to fd 1 ...
	I0127 01:59:49.894405  912194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:59:49.894410  912194 out.go:358] Setting ErrFile to fd 2...
	I0127 01:59:49.894415  912194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 01:59:49.894689  912194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 01:59:49.895257  912194 out.go:352] Setting JSON to false
	I0127 01:59:49.896289  912194 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9733,"bootTime":1737933457,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 01:59:49.896404  912194 start.go:139] virtualization: kvm guest
	I0127 01:59:49.897937  912194 out.go:177] * [functional-308251] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 01:59:49.899348  912194 notify.go:220] Checking for updates...
	I0127 01:59:49.899355  912194 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 01:59:49.900453  912194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 01:59:49.901602  912194 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 01:59:49.903078  912194 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 01:59:49.904430  912194 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 01:59:49.905562  912194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 01:59:49.907476  912194 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 01:59:49.908200  912194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:59:49.908257  912194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:59:49.926411  912194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0127 01:59:49.926897  912194 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:59:49.927492  912194 main.go:141] libmachine: Using API Version  1
	I0127 01:59:49.927516  912194 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:59:49.927915  912194 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:59:49.928151  912194 main.go:141] libmachine: (functional-308251) Calling .DriverName
	I0127 01:59:49.928448  912194 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 01:59:49.928867  912194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 01:59:49.928963  912194 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 01:59:49.948408  912194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39369
	I0127 01:59:49.948874  912194 main.go:141] libmachine: () Calling .GetVersion
	I0127 01:59:49.949483  912194 main.go:141] libmachine: Using API Version  1
	I0127 01:59:49.949519  912194 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 01:59:49.949952  912194 main.go:141] libmachine: () Calling .GetMachineName
	I0127 01:59:49.950169  912194 main.go:141] libmachine: (functional-308251) Calling .DriverName
	I0127 01:59:49.991620  912194 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 01:59:49.992777  912194 start.go:297] selected driver: kvm2
	I0127 01:59:49.992796  912194 start.go:901] validating driver "kvm2" against &{Name:functional-308251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-308251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 01:59:49.993014  912194 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 01:59:49.995198  912194 out.go:201] 
	W0127 01:59:49.996293  912194 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 01:59:49.997408  912194 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
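
The stderr above is the same insufficient-memory dry-run, captured with minikube's French message catalogue. A hedged way to reproduce it by hand, assuming minikube selects the catalogue from the standard locale environment variables (the test harness may set these differently):
	LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-308251 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
	# expected: exit status 23 and the localized RSRC_INSUFFICIENT_REQ_MEMORY message shown above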

                                                
                                    
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
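
The JSON form of status carries the same fields the go-template above pulls out (Host, Kubelet, APIServer, Kubeconfig). A small usage sketch, assuming jq is available on the host:
	out/minikube-linux-amd64 -p functional-308251 status -o json | jq -r '.Host'        # e.g. "Running"
	out/minikube-linux-amd64 -p functional-308251 status -o json | jq -r '.Kubeconfig'  # e.g. "Configured"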

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-308251 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-308251 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-sgt86" [a21c8929-20da-4296-9f53-48150cf80ab1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-sgt86" [a21c8929-20da-4296-9f53-48150cf80ab1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004740831s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.88:31774
functional_test.go:1675: http://192.168.39.88:31774: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-sgt86

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.88:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.88:31774
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.02s)
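
The same end-to-end check can be reproduced with the commands the test drives; only the NodePort URL printed by "service --url" differs between runs (a sketch):
	kubectl --context functional-308251 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-308251 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-308251 service hello-node-connect --url)
	curl -s "$URL"   # the echoserver reply should include the pod hostname, as in the body above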

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dee978e4-9045-46bb-9785-ee3c63ff6aa7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003831712s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-308251 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-308251 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-308251 get pvc myclaim -o=json
I0127 01:59:56.103998  904889 retry.go:31] will retry after 2.920219145s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2f582c15-4380-4ed1-ba28-f5f4e4c599fc ResourceVersion:746 Generation:0 CreationTimestamp:2025-01-27 01:59:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-2f582c15-4380-4ed1-ba28-f5f4e4c599fc StorageClassName:0xc001b98940 VolumeMode:0xc001b98950 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-308251 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-308251 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [59d5bf72-e5f8-4cda-952a-1f09758168ff] Pending
helpers_test.go:344: "sp-pod" [59d5bf72-e5f8-4cda-952a-1f09758168ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [59d5bf72-e5f8-4cda-952a-1f09758168ff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004565813s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-308251 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-308251 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-308251 delete -f testdata/storage-provisioner/pod.yaml: (2.636460865s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-308251 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a61f1b0-efd1-4f2c-97f7-f525c24c5403] Pending
helpers_test.go:344: "sp-pod" [8a61f1b0-efd1-4f2c-97f7-f525c24c5403] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8a61f1b0-efd1-4f2c-97f7-f525c24c5403] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004025841s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-308251 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.46s)
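
For context, the claim created from testdata/storage-provisioner/pvc.yaml can be reconstructed from the last-applied-configuration annotation logged above; the actual testdata file may be laid out differently, so treat this as a sketch:
	kubectl --context functional-308251 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF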

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh -n functional-308251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cp functional-308251:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2450856789/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh -n functional-308251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh -n functional-308251 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
TestFunctional/parallel/MySQL (30.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-308251 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-68pfc" [1f03bd6e-f39a-4c29-9bae-6d67f55edc50] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-68pfc" [1f03bd6e-f39a-4c29-9bae-6d67f55edc50] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.004074078s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;": exit status 1 (115.061235ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 02:00:32.770223  904889 retry.go:31] will retry after 555.655149ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;": exit status 1 (128.027803ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 02:00:33.454807  904889 retry.go:31] will retry after 1.844190352s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.95s)
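
The two ERROR 2002 retries above are just mysqld still initializing inside the pod; the check the test converges on is equivalent to this hedged wait loop:
	# retry the probe query until mysqld starts accepting connections on its socket
	until kubectl --context functional-308251 exec mysql-58ccfd96bb-68pfc -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 2
	done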

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/904889/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /etc/test/nested/copy/904889/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
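
File sync works by copying anything under $MINIKUBE_HOME/files/ into the VM at the same path, which is where the checked file comes from (the 904889 path component matches this run's test process id). A sketch of the staging step; the harness's exact setup may differ:
	FILES=/home/jenkins/minikube-integration/20316-897624/.minikube/files
	mkdir -p "$FILES/etc/test/nested/copy/904889"
	echo "Test file for checking file sync process" > "$FILES/etc/test/nested/copy/904889/hosts"
	# after the next minikube start, the file appears in the VM at /etc/test/nested/copy/904889/hosts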

                                                
                                    
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/904889.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /etc/ssl/certs/904889.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/904889.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /usr/share/ca-certificates/904889.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/9048892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /etc/ssl/certs/9048892.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/9048892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /usr/share/ca-certificates/9048892.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
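
The .0 filenames follow the OpenSSL subject-hash convention (what c_rehash or "openssl x509 -hash" produces), which is why each synced certificate is checked under three paths. Assuming openssl is present in the guest image and the grouping above reflects the pairing, the hashes can be verified like this:
	out/minikube-linux-amd64 -p functional-308251 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/904889.pem"   # expected: 51391683
	out/minikube-linux-amd64 -p functional-308251 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/9048892.pem"  # expected: 3ec20f2e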

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-308251 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "sudo systemctl is-active docker": exit status 1 (213.812761ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "sudo systemctl is-active containerd": exit status 1 (200.272698ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
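
systemctl is-active exits 0 only for an active unit (the status 3 above means inactive/dead), so the non-zero exit plus "inactive" on stdout is exactly what a crio-only node should report. The same check by hand:
	out/minikube-linux-amd64 -p functional-308251 ssh "sudo systemctl is-active docker" || echo "docker is not the active runtime, as expected"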

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-308251 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-308251 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-8k9tw" [837d42dd-1826-4c65-8256-9daa345f3ba6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-8k9tw" [837d42dd-1826-4c65-8256-9daa345f3ba6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004068865s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdany-port1600518187/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737943188757140746" to /tmp/TestFunctionalparallelMountCmdany-port1600518187/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737943188757140746" to /tmp/TestFunctionalparallelMountCmdany-port1600518187/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737943188757140746" to /tmp/TestFunctionalparallelMountCmdany-port1600518187/001/test-1737943188757140746
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.117829ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 01:59:49.022605  904889 retry.go:31] will retry after 424.680438ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 01:59 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 01:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 01:59 test-1737943188757140746
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh cat /mount-9p/test-1737943188757140746
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-308251 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [85682be6-d0d2-4cec-8f52-9aec05a56708] Pending
helpers_test.go:344: "busybox-mount" [85682be6-d0d2-4cec-8f52-9aec05a56708] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [85682be6-d0d2-4cec-8f52-9aec05a56708] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [85682be6-d0d2-4cec-8f52-9aec05a56708] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003662811s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-308251 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdany-port1600518187/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.85s)
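
The mount round-trip above boils down to three commands; a sketch using an arbitrary host directory (the temp path in the log is generated per run):
	mkdir -p /tmp/mount-demo
	out/minikube-linux-amd64 mount -p functional-308251 /tmp/mount-demo:/mount-9p &    # keep the mount helper running in the background
	out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p" # should show a 9p filesystem
	out/minikube-linux-amd64 -p functional-308251 ssh "sudo umount -f /mount-9p"       # then stop the backgrounded mount command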

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "381.505844ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.843545ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "304.286956ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.90522ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service list -o json
functional_test.go:1494: Took "499.225883ms" to run "out/minikube-linux-amd64 -p functional-308251 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.88:30745
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdspecific-port250333986/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.121694ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 02:00:00.884993  904889 retry.go:31] will retry after 297.22377ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdspecific-port250333986/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "sudo umount -f /mount-9p": exit status 1 (264.429633ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-308251 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdspecific-port250333986/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.88:30745
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T" /mount1: exit status 1 (302.460466ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 02:00:02.689346  904889 retry.go:31] will retry after 330.826017ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-308251 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-308251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3405197315/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-308251 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-308251
localhost/kicbase/echo-server:functional-308251
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-308251 image ls --format short --alsologtostderr:
I0127 02:00:27.046851  914182 out.go:345] Setting OutFile to fd 1 ...
I0127 02:00:27.047016  914182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.047065  914182 out.go:358] Setting ErrFile to fd 2...
I0127 02:00:27.047075  914182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.047417  914182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
I0127 02:00:27.048163  914182 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.048270  914182 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.048695  914182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.048761  914182 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.064653  914182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
I0127 02:00:27.065345  914182 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.066044  914182 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.066072  914182 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.066461  914182 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.066737  914182 main.go:141] libmachine: (functional-308251) Calling .GetState
I0127 02:00:27.069285  914182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.069342  914182 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.085152  914182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
I0127 02:00:27.085638  914182 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.086146  914182 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.086170  914182 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.086558  914182 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.086732  914182 main.go:141] libmachine: (functional-308251) Calling .DriverName
I0127 02:00:27.086908  914182 ssh_runner.go:195] Run: systemctl --version
I0127 02:00:27.086968  914182 main.go:141] libmachine: (functional-308251) Calling .GetSSHHostname
I0127 02:00:27.089989  914182 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.090460  914182 main.go:141] libmachine: (functional-308251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:b4", ip: ""} in network mk-functional-308251: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:35 +0000 UTC Type:0 Mac:52:54:00:80:60:b4 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-308251 Clientid:01:52:54:00:80:60:b4}
I0127 02:00:27.090494  914182 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined IP address 192.168.39.88 and MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.090627  914182 main.go:141] libmachine: (functional-308251) Calling .GetSSHPort
I0127 02:00:27.090870  914182 main.go:141] libmachine: (functional-308251) Calling .GetSSHKeyPath
I0127 02:00:27.091064  914182 main.go:141] libmachine: (functional-308251) Calling .GetSSHUsername
I0127 02:00:27.091208  914182 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/functional-308251/id_rsa Username:docker}
I0127 02:00:27.193410  914182 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 02:00:27.286685  914182 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.286710  914182 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.287033  914182 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.287063  914182 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:27.287083  914182 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.287091  914182 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.287341  914182 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.287362  914182 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-308251 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-308251  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-308251  | d3529f6187aca | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-308251 image ls --format table --alsologtostderr:
I0127 02:00:27.584868  914293 out.go:345] Setting OutFile to fd 1 ...
I0127 02:00:27.585005  914293 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.585017  914293 out.go:358] Setting ErrFile to fd 2...
I0127 02:00:27.585023  914293 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.585227  914293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
I0127 02:00:27.585913  914293 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.586038  914293 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.586429  914293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.586482  914293 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.601743  914293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
I0127 02:00:27.602212  914293 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.602809  914293 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.602836  914293 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.603165  914293 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.603381  914293 main.go:141] libmachine: (functional-308251) Calling .GetState
I0127 02:00:27.605287  914293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.605328  914293 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.620326  914293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
I0127 02:00:27.620738  914293 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.621243  914293 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.621268  914293 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.621614  914293 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.621814  914293 main.go:141] libmachine: (functional-308251) Calling .DriverName
I0127 02:00:27.622037  914293 ssh_runner.go:195] Run: systemctl --version
I0127 02:00:27.622079  914293 main.go:141] libmachine: (functional-308251) Calling .GetSSHHostname
I0127 02:00:27.624611  914293 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.624958  914293 main.go:141] libmachine: (functional-308251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:b4", ip: ""} in network mk-functional-308251: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:35 +0000 UTC Type:0 Mac:52:54:00:80:60:b4 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-308251 Clientid:01:52:54:00:80:60:b4}
I0127 02:00:27.625000  914293 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined IP address 192.168.39.88 and MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.625114  914293 main.go:141] libmachine: (functional-308251) Calling .GetSSHPort
I0127 02:00:27.625299  914293 main.go:141] libmachine: (functional-308251) Calling .GetSSHKeyPath
I0127 02:00:27.625461  914293 main.go:141] libmachine: (functional-308251) Calling .GetSSHUsername
I0127 02:00:27.625667  914293 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/functional-308251/id_rsa Username:docker}
I0127 02:00:27.708043  914293 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 02:00:27.747370  914293 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.747386  914293 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.747720  914293 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.747747  914293 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:27.747760  914293 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
I0127 02:00:27.747771  914293 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.747782  914293 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.748098  914293 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.748123  914293 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:27.748205  914293 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
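
The Size column in the table above is a human-readable rendering of the raw byte counts that the JSON and YAML listings further below report for the same images (742080 bytes shows as 742kB, 195872148 bytes as 196MB, 4943877 bytes as 4.94MB). A minimal Go sketch of that conversion, assuming decimal (power-of-1000) units and three significant digits as the rendered table suggests; it is an illustration, not minikube's own formatting code:

package main

import "fmt"

// humanSize renders a byte count with decimal units and three significant
// digits, matching the values seen in the table above.
func humanSize(bytes int64) string {
	const unit = 1000
	if bytes < unit {
		return fmt.Sprintf("%dB", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.3g%cB", float64(bytes)/float64(div), "kMGTPE"[exp])
}

func main() {
	// Byte counts taken from the JSON/YAML listings below.
	for _, b := range []int64{742080, 195872148, 4943877} {
		fmt.Println(b, "->", humanSize(b))
	}
}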

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-308251 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-308251"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064
c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s
-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"d3529f6187acabceaea2c4636d92498d56f8acea4696d9f219f76b7959f3aacf","repoDigests":["localhost/minikube-local-cache-test@sha256:956e38a29566d15fd33b573dae543d81d4df726a263996dcf4de4bdc59873755"],"repoTags":["localhost/minikube-local-cache-test:functional-308251"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.
k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145c
be47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube
-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha
256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-308251 image ls --format json --alsologtostderr:
I0127 02:00:27.350482  914238 out.go:345] Setting OutFile to fd 1 ...
I0127 02:00:27.350652  914238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.350695  914238 out.go:358] Setting ErrFile to fd 2...
I0127 02:00:27.350706  914238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.350997  914238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
I0127 02:00:27.351921  914238 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.352077  914238 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.352622  914238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.352673  914238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.369510  914238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
I0127 02:00:27.370066  914238 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.370720  914238 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.370742  914238 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.371145  914238 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.371409  914238 main.go:141] libmachine: (functional-308251) Calling .GetState
I0127 02:00:27.373547  914238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.373615  914238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.390648  914238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
I0127 02:00:27.391072  914238 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.391510  914238 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.391530  914238 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.391916  914238 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.392104  914238 main.go:141] libmachine: (functional-308251) Calling .DriverName
I0127 02:00:27.392333  914238 ssh_runner.go:195] Run: systemctl --version
I0127 02:00:27.392371  914238 main.go:141] libmachine: (functional-308251) Calling .GetSSHHostname
I0127 02:00:27.395583  914238 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.396114  914238 main.go:141] libmachine: (functional-308251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:b4", ip: ""} in network mk-functional-308251: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:35 +0000 UTC Type:0 Mac:52:54:00:80:60:b4 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-308251 Clientid:01:52:54:00:80:60:b4}
I0127 02:00:27.396159  914238 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined IP address 192.168.39.88 and MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.396272  914238 main.go:141] libmachine: (functional-308251) Calling .GetSSHPort
I0127 02:00:27.396476  914238 main.go:141] libmachine: (functional-308251) Calling .GetSSHKeyPath
I0127 02:00:27.396605  914238 main.go:141] libmachine: (functional-308251) Calling .GetSSHUsername
I0127 02:00:27.396735  914238 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/functional-308251/id_rsa Username:docker}
I0127 02:00:27.478737  914238 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 02:00:27.523859  914238 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.523876  914238 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.524285  914238 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
I0127 02:00:27.524357  914238 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.524371  914238 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:27.524381  914238 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.524388  914238 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.524651  914238 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.524665  914238 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
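
The image ls --format json output above is one JSON array whose elements carry id, repoDigests, repoTags and size fields (size is a byte count encoded as a string). A small self-contained Go sketch that decodes that shape from stdin and prints one line per tag; the struct tags are read off the output above rather than taken from minikube's source, so treat the shape as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count as a string, e.g. "742080"
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}

It could be fed directly from the command exercised above, e.g. out/minikube-linux-amd64 -p functional-308251 image ls --format json | go run parse_images.go, where parse_images.go is a hypothetical file holding the sketch.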

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-308251 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d3529f6187acabceaea2c4636d92498d56f8acea4696d9f219f76b7959f3aacf
repoDigests:
- localhost/minikube-local-cache-test@sha256:956e38a29566d15fd33b573dae543d81d4df726a263996dcf4de4bdc59873755
repoTags:
- localhost/minikube-local-cache-test:functional-308251
size: "3330"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-308251
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-308251 image ls --format yaml --alsologtostderr:
I0127 02:00:27.049728  914183 out.go:345] Setting OutFile to fd 1 ...
I0127 02:00:27.049894  914183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.049908  914183 out.go:358] Setting ErrFile to fd 2...
I0127 02:00:27.049915  914183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.050274  914183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
I0127 02:00:27.051195  914183 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.051371  914183 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.051999  914183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.052060  914183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.067656  914183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40329
I0127 02:00:27.068173  914183 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.068973  914183 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.069000  914183 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.069408  914183 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.069690  914183 main.go:141] libmachine: (functional-308251) Calling .GetState
I0127 02:00:27.071808  914183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.071865  914183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.086970  914183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
I0127 02:00:27.087466  914183 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.087970  914183 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.087990  914183 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.088303  914183 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.088655  914183 main.go:141] libmachine: (functional-308251) Calling .DriverName
I0127 02:00:27.088899  914183 ssh_runner.go:195] Run: systemctl --version
I0127 02:00:27.088948  914183 main.go:141] libmachine: (functional-308251) Calling .GetSSHHostname
I0127 02:00:27.091837  914183 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.092281  914183 main.go:141] libmachine: (functional-308251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:b4", ip: ""} in network mk-functional-308251: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:35 +0000 UTC Type:0 Mac:52:54:00:80:60:b4 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-308251 Clientid:01:52:54:00:80:60:b4}
I0127 02:00:27.092347  914183 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined IP address 192.168.39.88 and MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.092650  914183 main.go:141] libmachine: (functional-308251) Calling .GetSSHPort
I0127 02:00:27.092818  914183 main.go:141] libmachine: (functional-308251) Calling .GetSSHKeyPath
I0127 02:00:27.092955  914183 main.go:141] libmachine: (functional-308251) Calling .GetSSHUsername
I0127 02:00:27.093094  914183 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/functional-308251/id_rsa Username:docker}
I0127 02:00:27.184972  914183 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 02:00:27.260703  914183 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.260725  914183 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.261032  914183 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.261053  914183 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:27.261071  914183 main.go:141] libmachine: Making call to close driver server
I0127 02:00:27.261080  914183 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:27.261081  914183 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
I0127 02:00:27.261354  914183 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:27.261367  914183 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-308251 ssh pgrep buildkitd: exit status 1 (212.024861ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image build -t localhost/my-image:functional-308251 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 image build -t localhost/my-image:functional-308251 testdata/build --alsologtostderr: (3.955998028s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-308251 image build -t localhost/my-image:functional-308251 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fa803b69b8c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-308251
--> a8791450c90
Successfully tagged localhost/my-image:functional-308251
a8791450c909d519f0a9e939c5885d7b8e20fdcdecbbc5ced68cdf025570c89e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-308251 image build -t localhost/my-image:functional-308251 testdata/build --alsologtostderr:
I0127 02:00:27.532327  914283 out.go:345] Setting OutFile to fd 1 ...
I0127 02:00:27.532458  914283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.532471  914283 out.go:358] Setting ErrFile to fd 2...
I0127 02:00:27.532478  914283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:00:27.532684  914283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
I0127 02:00:27.533587  914283 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.534263  914283 config.go:182] Loaded profile config "functional-308251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 02:00:27.534684  914283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.534730  914283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.552705  914283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
I0127 02:00:27.553324  914283 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.553889  914283 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.553912  914283 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.554494  914283 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.554966  914283 main.go:141] libmachine: (functional-308251) Calling .GetState
I0127 02:00:27.557474  914283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 02:00:27.557526  914283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:00:27.573411  914283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
I0127 02:00:27.573957  914283 main.go:141] libmachine: () Calling .GetVersion
I0127 02:00:27.574501  914283 main.go:141] libmachine: Using API Version  1
I0127 02:00:27.574528  914283 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:00:27.574872  914283 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:00:27.575092  914283 main.go:141] libmachine: (functional-308251) Calling .DriverName
I0127 02:00:27.575325  914283 ssh_runner.go:195] Run: systemctl --version
I0127 02:00:27.575359  914283 main.go:141] libmachine: (functional-308251) Calling .GetSSHHostname
I0127 02:00:27.578902  914283 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.579365  914283 main.go:141] libmachine: (functional-308251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:60:b4", ip: ""} in network mk-functional-308251: {Iface:virbr1 ExpiryTime:2025-01-27 02:57:35 +0000 UTC Type:0 Mac:52:54:00:80:60:b4 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-308251 Clientid:01:52:54:00:80:60:b4}
I0127 02:00:27.579395  914283 main.go:141] libmachine: (functional-308251) DBG | domain functional-308251 has defined IP address 192.168.39.88 and MAC address 52:54:00:80:60:b4 in network mk-functional-308251
I0127 02:00:27.579545  914283 main.go:141] libmachine: (functional-308251) Calling .GetSSHPort
I0127 02:00:27.579715  914283 main.go:141] libmachine: (functional-308251) Calling .GetSSHKeyPath
I0127 02:00:27.579889  914283 main.go:141] libmachine: (functional-308251) Calling .GetSSHUsername
I0127 02:00:27.580029  914283 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/functional-308251/id_rsa Username:docker}
I0127 02:00:27.663752  914283 build_images.go:161] Building image from path: /tmp/build.711177967.tar
I0127 02:00:27.663828  914283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 02:00:27.673225  914283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.711177967.tar
I0127 02:00:27.677620  914283 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.711177967.tar: stat -c "%s %y" /var/lib/minikube/build/build.711177967.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.711177967.tar': No such file or directory
I0127 02:00:27.677657  914283 ssh_runner.go:362] scp /tmp/build.711177967.tar --> /var/lib/minikube/build/build.711177967.tar (3072 bytes)
I0127 02:00:27.702903  914283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.711177967
I0127 02:00:27.712908  914283 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.711177967 -xf /var/lib/minikube/build/build.711177967.tar
I0127 02:00:27.724090  914283 crio.go:315] Building image: /var/lib/minikube/build/build.711177967
I0127 02:00:27.724163  914283 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-308251 /var/lib/minikube/build/build.711177967 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 02:00:31.408419  914283 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-308251 /var/lib/minikube/build/build.711177967 --cgroup-manager=cgroupfs: (3.684215046s)
I0127 02:00:31.408560  914283 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.711177967
I0127 02:00:31.419783  914283 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.711177967.tar
I0127 02:00:31.430499  914283 build_images.go:217] Built localhost/my-image:functional-308251 from /tmp/build.711177967.tar
I0127 02:00:31.430541  914283 build_images.go:133] succeeded building to: functional-308251
I0127 02:00:31.430546  914283 build_images.go:134] failed building to: 
I0127 02:00:31.430577  914283 main.go:141] libmachine: Making call to close driver server
I0127 02:00:31.430595  914283 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:31.430912  914283 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:31.430936  914283 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:31.430946  914283 main.go:141] libmachine: Making call to close driver server
I0127 02:00:31.430955  914283 main.go:141] libmachine: (functional-308251) Calling .Close
I0127 02:00:31.430988  914283 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
I0127 02:00:31.431200  914283 main.go:141] libmachine: Successfully made call to close driver server
I0127 02:00:31.431215  914283 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 02:00:31.431254  914283 main.go:141] libmachine: (functional-308251) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)
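
The ImageBuild log above shows the build flow: the context directory passed on the command line (testdata/build) ends up packed as /tmp/build.711177967.tar, which is copied to /var/lib/minikube/build/, extracted there, and built with sudo podman build -t localhost/my-image:functional-308251 ... --cgroup-manager=cgroupfs. A minimal Go sketch of just that packaging step using only the standard library; the directory and output path are placeholders, and this illustrates the idea rather than reproducing minikube's build_images.go:

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir rolls every regular file under dir into a tar archive at out,
// storing paths relative to dir, the way a build context is shipped.
func tarDir(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	// Placeholder paths; the test uses testdata/build and a /tmp/build.NNN.tar name.
	if err := tarDir("testdata/build", "/tmp/build.example.tar"); err != nil {
		panic(err)
	}
}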

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.41s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (4.384640406s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-308251
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image load --daemon kicbase/echo-server:functional-308251 --alsologtostderr
2025/01/27 02:00:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 image load --daemon kicbase/echo-server:functional-308251 --alsologtostderr: (4.090035502s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image load --daemon kicbase/echo-server:functional-308251 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-308251
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image load --daemon kicbase/echo-server:functional-308251 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image save kicbase/echo-server:functional-308251 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 image save kicbase/echo-server:functional-308251 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.403733713s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image rm kicbase/echo-server:functional-308251 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 image rm kicbase/echo-server:functional-308251 --alsologtostderr: (2.676618324s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-308251 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.18628149s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-308251
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-308251 image save --daemon kicbase/echo-server:functional-308251 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-308251
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)
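
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a save / remove / load round trip against the same profile. A hedged Go sketch that replays the CLI side of that sequence by shelling out to the same minikube binary; the binary path and profile name are copied from the log, while the tar path here is a placeholder rather than the workspace path the tests use:

package main

import (
	"log"
	"os/exec"
)

// run invokes the locally built minikube binary (relative path as in the log)
// and aborts on the first failing command.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	log.Printf("%v ok", args)
}

func main() {
	const profile = "functional-308251"
	tarPath := "/tmp/echo-server-save.tar" // placeholder for the workspace path in the log
	run("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tarPath)
	run("-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
	run("-p", profile, "image", "load", tarPath)
	run("-p", profile, "image", "ls")
}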

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-308251
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-308251
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-308251
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-226508 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 02:01:33.566135  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:02:01.279974  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-226508 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.529714668s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.19s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.59s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-226508 -- rollout status deployment/busybox: (4.474918816s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-2jvdk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-6k667 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-xpnst -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-2jvdk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-6k667 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-xpnst -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-2jvdk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-6k667 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-xpnst -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.59s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-2jvdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-2jvdk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-6k667 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-6k667 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-xpnst -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-226508 -- exec busybox-58667487b6-xpnst -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
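
The PingHostFromPods steps above resolve host.minikube.internal from inside each busybox pod (the awk 'NR==5' | cut -d' ' -f3 pipeline extracts the resolved address from nslookup's output) and then ping the host-side address 192.168.39.1. A minimal Go sketch of the same resolution step using the standard library; it only yields the address when run somewhere that name actually resolves, such as inside the cluster, and it is not part of the test suite:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// host.minikube.internal resolves to the host-side bridge address
	// (192.168.39.1 in the log above) when queried through the cluster's DNS.
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}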

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (67.83s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-226508 -v=7 --alsologtostderr
E0127 02:04:48.567244  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.573748  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.585232  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.606698  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.648123  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.729663  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:48.891733  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:49.213240  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:49.854572  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:51.136586  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:53.698248  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:04:58.819653  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:05:09.061204  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-226508 -v=7 --alsologtostderr: (1m6.953549715s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (67.83s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-226508 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp testdata/cp-test.txt ha-226508:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1404919692/001/cp-test_ha-226508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508:/home/docker/cp-test.txt ha-226508-m02:/home/docker/cp-test_ha-226508_ha-226508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test_ha-226508_ha-226508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508:/home/docker/cp-test.txt ha-226508-m03:/home/docker/cp-test_ha-226508_ha-226508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test_ha-226508_ha-226508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508:/home/docker/cp-test.txt ha-226508-m04:/home/docker/cp-test_ha-226508_ha-226508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test_ha-226508_ha-226508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp testdata/cp-test.txt ha-226508-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1404919692/001/cp-test_ha-226508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m02:/home/docker/cp-test.txt ha-226508:/home/docker/cp-test_ha-226508-m02_ha-226508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test_ha-226508-m02_ha-226508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m02:/home/docker/cp-test.txt ha-226508-m03:/home/docker/cp-test_ha-226508-m02_ha-226508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test_ha-226508-m02_ha-226508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m02:/home/docker/cp-test.txt ha-226508-m04:/home/docker/cp-test_ha-226508-m02_ha-226508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test_ha-226508-m02_ha-226508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp testdata/cp-test.txt ha-226508-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1404919692/001/cp-test_ha-226508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m03:/home/docker/cp-test.txt ha-226508:/home/docker/cp-test_ha-226508-m03_ha-226508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test_ha-226508-m03_ha-226508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m03:/home/docker/cp-test.txt ha-226508-m02:/home/docker/cp-test_ha-226508-m03_ha-226508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test_ha-226508-m03_ha-226508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m03:/home/docker/cp-test.txt ha-226508-m04:/home/docker/cp-test_ha-226508-m03_ha-226508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test_ha-226508-m03_ha-226508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp testdata/cp-test.txt ha-226508-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1404919692/001/cp-test_ha-226508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m04:/home/docker/cp-test.txt ha-226508:/home/docker/cp-test_ha-226508-m04_ha-226508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508 "sudo cat /home/docker/cp-test_ha-226508-m04_ha-226508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m04:/home/docker/cp-test.txt ha-226508-m02:/home/docker/cp-test_ha-226508-m04_ha-226508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m02 "sudo cat /home/docker/cp-test_ha-226508-m04_ha-226508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 cp ha-226508-m04:/home/docker/cp-test.txt ha-226508-m03:/home/docker/cp-test_ha-226508-m04_ha-226508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 ssh -n ha-226508-m03 "sudo cat /home/docker/cp-test_ha-226508-m04_ha-226508-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.31s)

TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 node stop m02 -v=7 --alsologtostderr
E0127 02:05:29.542577  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:06:10.504788  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:06:33.566331  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-226508 node stop m02 -v=7 --alsologtostderr: (1m30.98521078s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr: exit status 7 (626.819064ms)
-- stdout --
	ha-226508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-226508-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-226508-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0127 02:06:56.737974  919036 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:06:56.738116  919036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:06:56.738132  919036 out.go:358] Setting ErrFile to fd 2...
	I0127 02:06:56.738136  919036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:06:56.738390  919036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:06:56.738597  919036 out.go:352] Setting JSON to false
	I0127 02:06:56.738636  919036 mustload.go:65] Loading cluster: ha-226508
	I0127 02:06:56.738756  919036 notify.go:220] Checking for updates...
	I0127 02:06:56.739137  919036 config.go:182] Loaded profile config "ha-226508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:06:56.739162  919036 status.go:174] checking status of ha-226508 ...
	I0127 02:06:56.739605  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.739657  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.758941  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0127 02:06:56.759383  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.760014  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.760042  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.760415  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.760632  919036 main.go:141] libmachine: (ha-226508) Calling .GetState
	I0127 02:06:56.762308  919036 status.go:371] ha-226508 host status = "Running" (err=<nil>)
	I0127 02:06:56.762326  919036 host.go:66] Checking if "ha-226508" exists ...
	I0127 02:06:56.762622  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.762661  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.777387  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0127 02:06:56.777858  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.778452  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.778479  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.778773  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.778987  919036 main.go:141] libmachine: (ha-226508) Calling .GetIP
	I0127 02:06:56.782182  919036 main.go:141] libmachine: (ha-226508) DBG | domain ha-226508 has defined MAC address 52:54:00:c2:06:c8 in network mk-ha-226508
	I0127 02:06:56.782636  919036 main.go:141] libmachine: (ha-226508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:06:c8", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:00:54 +0000 UTC Type:0 Mac:52:54:00:c2:06:c8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-226508 Clientid:01:52:54:00:c2:06:c8}
	I0127 02:06:56.782657  919036 main.go:141] libmachine: (ha-226508) DBG | domain ha-226508 has defined IP address 192.168.39.180 and MAC address 52:54:00:c2:06:c8 in network mk-ha-226508
	I0127 02:06:56.782767  919036 host.go:66] Checking if "ha-226508" exists ...
	I0127 02:06:56.783119  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.783178  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.797751  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
	I0127 02:06:56.798143  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.798638  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.798659  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.798996  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.799268  919036 main.go:141] libmachine: (ha-226508) Calling .DriverName
	I0127 02:06:56.799479  919036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:06:56.799517  919036 main.go:141] libmachine: (ha-226508) Calling .GetSSHHostname
	I0127 02:06:56.802318  919036 main.go:141] libmachine: (ha-226508) DBG | domain ha-226508 has defined MAC address 52:54:00:c2:06:c8 in network mk-ha-226508
	I0127 02:06:56.802729  919036 main.go:141] libmachine: (ha-226508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:06:c8", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:00:54 +0000 UTC Type:0 Mac:52:54:00:c2:06:c8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-226508 Clientid:01:52:54:00:c2:06:c8}
	I0127 02:06:56.802757  919036 main.go:141] libmachine: (ha-226508) DBG | domain ha-226508 has defined IP address 192.168.39.180 and MAC address 52:54:00:c2:06:c8 in network mk-ha-226508
	I0127 02:06:56.802910  919036 main.go:141] libmachine: (ha-226508) Calling .GetSSHPort
	I0127 02:06:56.803084  919036 main.go:141] libmachine: (ha-226508) Calling .GetSSHKeyPath
	I0127 02:06:56.803219  919036 main.go:141] libmachine: (ha-226508) Calling .GetSSHUsername
	I0127 02:06:56.803331  919036 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/ha-226508/id_rsa Username:docker}
	I0127 02:06:56.884587  919036 ssh_runner.go:195] Run: systemctl --version
	I0127 02:06:56.891521  919036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:06:56.906661  919036 kubeconfig.go:125] found "ha-226508" server: "https://192.168.39.254:8443"
	I0127 02:06:56.906708  919036 api_server.go:166] Checking apiserver status ...
	I0127 02:06:56.906781  919036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:06:56.922177  919036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup
	W0127 02:06:56.932456  919036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:06:56.932511  919036 ssh_runner.go:195] Run: ls
	I0127 02:06:56.936813  919036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 02:06:56.942908  919036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 02:06:56.942937  919036 status.go:463] ha-226508 apiserver status = Running (err=<nil>)
	I0127 02:06:56.942950  919036 status.go:176] ha-226508 status: &{Name:ha-226508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:06:56.942973  919036 status.go:174] checking status of ha-226508-m02 ...
	I0127 02:06:56.943281  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.943327  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.958801  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0127 02:06:56.959351  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.959826  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.959852  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.960195  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.960386  919036 main.go:141] libmachine: (ha-226508-m02) Calling .GetState
	I0127 02:06:56.962056  919036 status.go:371] ha-226508-m02 host status = "Stopped" (err=<nil>)
	I0127 02:06:56.962074  919036 status.go:384] host is not running, skipping remaining checks
	I0127 02:06:56.962082  919036 status.go:176] ha-226508-m02 status: &{Name:ha-226508-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:06:56.962151  919036 status.go:174] checking status of ha-226508-m03 ...
	I0127 02:06:56.962439  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.962487  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.977192  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I0127 02:06:56.977585  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.978053  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.978075  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.978377  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.978584  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetState
	I0127 02:06:56.980314  919036 status.go:371] ha-226508-m03 host status = "Running" (err=<nil>)
	I0127 02:06:56.980333  919036 host.go:66] Checking if "ha-226508-m03" exists ...
	I0127 02:06:56.980630  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:56.980664  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:56.995255  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0127 02:06:56.995671  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:56.996257  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:56.996289  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:56.996608  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:56.996857  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetIP
	I0127 02:06:56.999550  919036 main.go:141] libmachine: (ha-226508-m03) DBG | domain ha-226508-m03 has defined MAC address 52:54:00:20:02:ec in network mk-ha-226508
	I0127 02:06:56.999964  919036 main.go:141] libmachine: (ha-226508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:02:ec", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:02:52 +0000 UTC Type:0 Mac:52:54:00:20:02:ec Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-226508-m03 Clientid:01:52:54:00:20:02:ec}
	I0127 02:06:56.999984  919036 main.go:141] libmachine: (ha-226508-m03) DBG | domain ha-226508-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:20:02:ec in network mk-ha-226508
	I0127 02:06:57.000162  919036 host.go:66] Checking if "ha-226508-m03" exists ...
	I0127 02:06:57.000462  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:57.000526  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:57.015260  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I0127 02:06:57.015642  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:57.016137  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:57.016166  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:57.016558  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:57.016781  919036 main.go:141] libmachine: (ha-226508-m03) Calling .DriverName
	I0127 02:06:57.016974  919036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:06:57.017000  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetSSHHostname
	I0127 02:06:57.019583  919036 main.go:141] libmachine: (ha-226508-m03) DBG | domain ha-226508-m03 has defined MAC address 52:54:00:20:02:ec in network mk-ha-226508
	I0127 02:06:57.019959  919036 main.go:141] libmachine: (ha-226508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:02:ec", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:02:52 +0000 UTC Type:0 Mac:52:54:00:20:02:ec Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-226508-m03 Clientid:01:52:54:00:20:02:ec}
	I0127 02:06:57.019991  919036 main.go:141] libmachine: (ha-226508-m03) DBG | domain ha-226508-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:20:02:ec in network mk-ha-226508
	I0127 02:06:57.020105  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetSSHPort
	I0127 02:06:57.020293  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetSSHKeyPath
	I0127 02:06:57.020406  919036 main.go:141] libmachine: (ha-226508-m03) Calling .GetSSHUsername
	I0127 02:06:57.020518  919036 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/ha-226508-m03/id_rsa Username:docker}
	I0127 02:06:57.098549  919036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:06:57.115496  919036 kubeconfig.go:125] found "ha-226508" server: "https://192.168.39.254:8443"
	I0127 02:06:57.115534  919036 api_server.go:166] Checking apiserver status ...
	I0127 02:06:57.115567  919036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:06:57.130244  919036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup
	W0127 02:06:57.139105  919036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:06:57.139178  919036 ssh_runner.go:195] Run: ls
	I0127 02:06:57.143165  919036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 02:06:57.147865  919036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 02:06:57.147889  919036 status.go:463] ha-226508-m03 apiserver status = Running (err=<nil>)
	I0127 02:06:57.147897  919036 status.go:176] ha-226508-m03 status: &{Name:ha-226508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:06:57.147917  919036 status.go:174] checking status of ha-226508-m04 ...
	I0127 02:06:57.148216  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:57.148261  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:57.163497  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0127 02:06:57.164082  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:57.164609  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:57.164631  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:57.165041  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:57.165275  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetState
	I0127 02:06:57.167014  919036 status.go:371] ha-226508-m04 host status = "Running" (err=<nil>)
	I0127 02:06:57.167035  919036 host.go:66] Checking if "ha-226508-m04" exists ...
	I0127 02:06:57.167329  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:57.167369  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:57.182644  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0127 02:06:57.183163  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:57.183626  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:57.183649  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:57.183993  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:57.184197  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetIP
	I0127 02:06:57.187193  919036 main.go:141] libmachine: (ha-226508-m04) DBG | domain ha-226508-m04 has defined MAC address 52:54:00:80:e5:4c in network mk-ha-226508
	I0127 02:06:57.187711  919036 main.go:141] libmachine: (ha-226508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e5:4c", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:04:18 +0000 UTC Type:0 Mac:52:54:00:80:e5:4c Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-226508-m04 Clientid:01:52:54:00:80:e5:4c}
	I0127 02:06:57.187741  919036 main.go:141] libmachine: (ha-226508-m04) DBG | domain ha-226508-m04 has defined IP address 192.168.39.159 and MAC address 52:54:00:80:e5:4c in network mk-ha-226508
	I0127 02:06:57.187971  919036 host.go:66] Checking if "ha-226508-m04" exists ...
	I0127 02:06:57.188287  919036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:06:57.188329  919036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:06:57.203576  919036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0127 02:06:57.204124  919036 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:06:57.204683  919036 main.go:141] libmachine: Using API Version  1
	I0127 02:06:57.204713  919036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:06:57.205140  919036 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:06:57.205355  919036 main.go:141] libmachine: (ha-226508-m04) Calling .DriverName
	I0127 02:06:57.205541  919036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:06:57.205568  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetSSHHostname
	I0127 02:06:57.208378  919036 main.go:141] libmachine: (ha-226508-m04) DBG | domain ha-226508-m04 has defined MAC address 52:54:00:80:e5:4c in network mk-ha-226508
	I0127 02:06:57.208841  919036 main.go:141] libmachine: (ha-226508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e5:4c", ip: ""} in network mk-ha-226508: {Iface:virbr1 ExpiryTime:2025-01-27 03:04:18 +0000 UTC Type:0 Mac:52:54:00:80:e5:4c Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-226508-m04 Clientid:01:52:54:00:80:e5:4c}
	I0127 02:06:57.208885  919036 main.go:141] libmachine: (ha-226508-m04) DBG | domain ha-226508-m04 has defined IP address 192.168.39.159 and MAC address 52:54:00:80:e5:4c in network mk-ha-226508
	I0127 02:06:57.208991  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetSSHPort
	I0127 02:06:57.209181  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetSSHKeyPath
	I0127 02:06:57.209339  919036 main.go:141] libmachine: (ha-226508-m04) Calling .GetSSHUsername
	I0127 02:06:57.209480  919036 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/ha-226508-m04/id_rsa Username:docker}
	I0127 02:06:57.296957  919036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:06:57.313008  919036 status.go:176] ha-226508-m04 status: &{Name:ha-226508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 node start m02 -v=7 --alsologtostderr
E0127 02:07:32.426340  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-226508 node start m02 -v=7 --alsologtostderr: (51.93916445s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-226508 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-226508 -v=7 --alsologtostderr
E0127 02:09:48.567536  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:10:16.268461  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:11:33.566061  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-226508 -v=7 --alsologtostderr: (4m34.034431108s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-226508 --wait=true -v=7 --alsologtostderr
E0127 02:12:56.642050  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:14:48.567249  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-226508 --wait=true -v=7 --alsologtostderr: (2m37.358318781s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-226508
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.2s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-226508 node delete m03 -v=7 --alsologtostderr: (17.431943027s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.20s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (272.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 stop -v=7 --alsologtostderr
E0127 02:16:33.566053  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:19:48.566728  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-226508 stop -v=7 --alsologtostderr: (4m32.823016535s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr: exit status 7 (113.17246ms)
-- stdout --
	ha-226508
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226508-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226508-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 02:19:54.870743  923636 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:19:54.871120  923636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:54.871174  923636 out.go:358] Setting ErrFile to fd 2...
	I0127 02:19:54.871192  923636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:54.871659  923636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:19:54.872107  923636 out.go:352] Setting JSON to false
	I0127 02:19:54.872153  923636 mustload.go:65] Loading cluster: ha-226508
	I0127 02:19:54.872309  923636 notify.go:220] Checking for updates...
	I0127 02:19:54.873036  923636 config.go:182] Loaded profile config "ha-226508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:19:54.873084  923636 status.go:174] checking status of ha-226508 ...
	I0127 02:19:54.873548  923636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:19:54.873593  923636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:19:54.889256  923636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I0127 02:19:54.889678  923636 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:19:54.890302  923636 main.go:141] libmachine: Using API Version  1
	I0127 02:19:54.890337  923636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:19:54.890766  923636 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:19:54.891062  923636 main.go:141] libmachine: (ha-226508) Calling .GetState
	I0127 02:19:54.892792  923636 status.go:371] ha-226508 host status = "Stopped" (err=<nil>)
	I0127 02:19:54.892813  923636 status.go:384] host is not running, skipping remaining checks
	I0127 02:19:54.892821  923636 status.go:176] ha-226508 status: &{Name:ha-226508 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:19:54.892969  923636 status.go:174] checking status of ha-226508-m02 ...
	I0127 02:19:54.893321  923636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:19:54.893373  923636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:19:54.909164  923636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0127 02:19:54.909718  923636 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:19:54.910280  923636 main.go:141] libmachine: Using API Version  1
	I0127 02:19:54.910307  923636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:19:54.910651  923636 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:19:54.910836  923636 main.go:141] libmachine: (ha-226508-m02) Calling .GetState
	I0127 02:19:54.912227  923636 status.go:371] ha-226508-m02 host status = "Stopped" (err=<nil>)
	I0127 02:19:54.912237  923636 status.go:384] host is not running, skipping remaining checks
	I0127 02:19:54.912243  923636 status.go:176] ha-226508-m02 status: &{Name:ha-226508-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:19:54.912266  923636 status.go:174] checking status of ha-226508-m04 ...
	I0127 02:19:54.912580  923636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:19:54.912622  923636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:19:54.928659  923636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0127 02:19:54.929191  923636 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:19:54.929721  923636 main.go:141] libmachine: Using API Version  1
	I0127 02:19:54.929742  923636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:19:54.930159  923636 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:19:54.930382  923636 main.go:141] libmachine: (ha-226508-m04) Calling .GetState
	I0127 02:19:54.931854  923636 status.go:371] ha-226508-m04 host status = "Stopped" (err=<nil>)
	I0127 02:19:54.931871  923636 status.go:384] host is not running, skipping remaining checks
	I0127 02:19:54.931879  923636 status.go:176] ha-226508-m04 status: &{Name:ha-226508-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.94s)

TestMultiControlPlane/serial/RestartCluster (112.01s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-226508 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 02:21:11.629989  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:21:33.566757  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-226508 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.265221823s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (112.01s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (76.69s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-226508 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-226508 --control-plane -v=7 --alsologtostderr: (1m15.844750581s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-226508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (76.16s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-843592 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-843592 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.161332686s)
--- PASS: TestJSONOutput/start/Command (76.16s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-843592 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-843592 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-843592 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-843592 --output=json --user=testUser: (7.36169809s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-851932 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-851932 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.890047ms)
-- stdout --
	{"specversion":"1.0","id":"53b10767-bbdf-40e7-b039-cee0e9b0f82c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-851932] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bf61433-f618-4aef-bd47-a31698568005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20316"}}
	{"specversion":"1.0","id":"a6ff8e71-c992-4eff-8190-d6179922b726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"76b1245e-0343-4a90-ba1d-0796cc6fcc88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig"}}
	{"specversion":"1.0","id":"c9ac16fb-0e6a-4370-af69-0fdd734fb80e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube"}}
	{"specversion":"1.0","id":"72360df1-8917-4607-9aa5-6afebb39a8fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"71143b11-7103-47c3-a05e-ab8005bb4db0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"acd785ca-e9fc-404e-9295-1be3bc95abbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-851932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-851932
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (85.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-120529 --driver=kvm2  --container-runtime=crio
E0127 02:24:48.570746  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-120529 --driver=kvm2  --container-runtime=crio: (39.236103459s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-142580 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-142580 --driver=kvm2  --container-runtime=crio: (42.936741346s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-120529
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-142580
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-142580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-142580
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-142580: (1.026322481s)
helpers_test.go:175: Cleaning up "first-120529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-120529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-120529: (1.018077537s)
--- PASS: TestMinikubeProfile (85.30s)
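
The profile checks above rely on `profile list -ojson`. A rough Go sketch, assuming only that the command prints a single JSON document (its exact schema is not reproduced in this log), of inspecting that output generically:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Binary path as used throughout this report; adjust for a normal install.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	// Decode generically since the schema is not shown in the log.
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for key, raw := range payload {
		fmt.Printf("top-level key %q: %d bytes\n", key, len(raw))
	}
}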

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-039242 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-039242 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.776264649s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.78s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-039242 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-039242 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057615 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0127 02:26:33.569121  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057615 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.620431284s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.62s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-039242 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-057615
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-057615: (1.274418153s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.61s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057615
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057615: (22.612826424s)
--- PASS: TestMountStart/serial/RestartStopped (23.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (137.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-207207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 02:29:36.643624  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-207207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m16.909455553s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.33s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-207207 -- rollout status deployment/busybox: (4.020533971s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-cmhxs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-qpmmd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-cmhxs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-qpmmd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-cmhxs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-qpmmd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.50s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-cmhxs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-cmhxs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-qpmmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-207207 -- exec busybox-58667487b6-qpmmd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
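
The host-reachability check above boils down to two `kubectl exec` calls per pod. A hedged Go sketch of the same sequence (the profile name and shell pipelines come from the log; the pod name is a placeholder, since busybox pod names are generated per run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "multinode-207207"
	pod := "busybox-58667487b6-xxxxx" // placeholder: actual names are generated per deployment

	// Resolve host.minikube.internal from inside the pod, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host IP seen from pod:", hostIP)

	// Ping the resolved host IP from inside the pod.
	if out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, out)
	}
	fmt.Println("host reachable from pod")
}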

                                                
                                    
TestMultiNode/serial/AddNode (50.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-207207 -v 3 --alsologtostderr
E0127 02:29:48.567040  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-207207 -v 3 --alsologtostderr: (49.91802469s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.49s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-207207 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp testdata/cp-test.txt multinode-207207:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202673883/001/cp-test_multinode-207207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207:/home/docker/cp-test.txt multinode-207207-m02:/home/docker/cp-test_multinode-207207_multinode-207207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test_multinode-207207_multinode-207207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207:/home/docker/cp-test.txt multinode-207207-m03:/home/docker/cp-test_multinode-207207_multinode-207207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test_multinode-207207_multinode-207207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp testdata/cp-test.txt multinode-207207-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202673883/001/cp-test_multinode-207207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m02:/home/docker/cp-test.txt multinode-207207:/home/docker/cp-test_multinode-207207-m02_multinode-207207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test_multinode-207207-m02_multinode-207207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m02:/home/docker/cp-test.txt multinode-207207-m03:/home/docker/cp-test_multinode-207207-m02_multinode-207207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test_multinode-207207-m02_multinode-207207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp testdata/cp-test.txt multinode-207207-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202673883/001/cp-test_multinode-207207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m03:/home/docker/cp-test.txt multinode-207207:/home/docker/cp-test_multinode-207207-m03_multinode-207207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207 "sudo cat /home/docker/cp-test_multinode-207207-m03_multinode-207207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 cp multinode-207207-m03:/home/docker/cp-test.txt multinode-207207-m02:/home/docker/cp-test_multinode-207207-m03_multinode-207207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 ssh -n multinode-207207-m02 "sudo cat /home/docker/cp-test_multinode-207207-m03_multinode-207207-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)
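
The copy test above repeats one pattern many times: `minikube cp` pushes a file into a node and `minikube ssh -n` reads it back. A minimal Go sketch of that round trip, assuming the profile and node names taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile, node := "multinode-207207", "multinode-207207-m02"

	// Push a local file into the target node.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over ssh to confirm the copy landed.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	fmt.Println("copied contents:", strings.TrimSpace(string(out)))
}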

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-207207 node stop m03: (1.427797178s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-207207 status: exit status 7 (429.86091ms)

                                                
                                                
-- stdout --
	multinode-207207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-207207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-207207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr: exit status 7 (431.87846ms)

                                                
                                                
-- stdout --
	multinode-207207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-207207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-207207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:30:44.895770  931487 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:30:44.895925  931487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:30:44.895937  931487 out.go:358] Setting ErrFile to fd 2...
	I0127 02:30:44.895941  931487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:30:44.896154  931487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:30:44.896369  931487 out.go:352] Setting JSON to false
	I0127 02:30:44.896400  931487 mustload.go:65] Loading cluster: multinode-207207
	I0127 02:30:44.896461  931487 notify.go:220] Checking for updates...
	I0127 02:30:44.896895  931487 config.go:182] Loaded profile config "multinode-207207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:30:44.896919  931487 status.go:174] checking status of multinode-207207 ...
	I0127 02:30:44.897376  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:44.897423  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:44.916346  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0127 02:30:44.916823  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:44.917431  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:44.917468  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:44.917907  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:44.918108  931487 main.go:141] libmachine: (multinode-207207) Calling .GetState
	I0127 02:30:44.919959  931487 status.go:371] multinode-207207 host status = "Running" (err=<nil>)
	I0127 02:30:44.919983  931487 host.go:66] Checking if "multinode-207207" exists ...
	I0127 02:30:44.920285  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:44.920332  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:44.936297  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
	I0127 02:30:44.936754  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:44.937335  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:44.937360  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:44.937717  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:44.937948  931487 main.go:141] libmachine: (multinode-207207) Calling .GetIP
	I0127 02:30:44.940704  931487 main.go:141] libmachine: (multinode-207207) DBG | domain multinode-207207 has defined MAC address 52:54:00:0c:c4:2d in network mk-multinode-207207
	I0127 02:30:44.941238  931487 main.go:141] libmachine: (multinode-207207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c4:2d", ip: ""} in network mk-multinode-207207: {Iface:virbr1 ExpiryTime:2025-01-27 03:27:35 +0000 UTC Type:0 Mac:52:54:00:0c:c4:2d Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-207207 Clientid:01:52:54:00:0c:c4:2d}
	I0127 02:30:44.941273  931487 main.go:141] libmachine: (multinode-207207) DBG | domain multinode-207207 has defined IP address 192.168.39.41 and MAC address 52:54:00:0c:c4:2d in network mk-multinode-207207
	I0127 02:30:44.941424  931487 host.go:66] Checking if "multinode-207207" exists ...
	I0127 02:30:44.941878  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:44.941948  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:44.957933  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0127 02:30:44.958391  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:44.958904  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:44.958930  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:44.959275  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:44.959485  931487 main.go:141] libmachine: (multinode-207207) Calling .DriverName
	I0127 02:30:44.959682  931487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:30:44.959718  931487 main.go:141] libmachine: (multinode-207207) Calling .GetSSHHostname
	I0127 02:30:44.962444  931487 main.go:141] libmachine: (multinode-207207) DBG | domain multinode-207207 has defined MAC address 52:54:00:0c:c4:2d in network mk-multinode-207207
	I0127 02:30:44.962917  931487 main.go:141] libmachine: (multinode-207207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c4:2d", ip: ""} in network mk-multinode-207207: {Iface:virbr1 ExpiryTime:2025-01-27 03:27:35 +0000 UTC Type:0 Mac:52:54:00:0c:c4:2d Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-207207 Clientid:01:52:54:00:0c:c4:2d}
	I0127 02:30:44.962947  931487 main.go:141] libmachine: (multinode-207207) DBG | domain multinode-207207 has defined IP address 192.168.39.41 and MAC address 52:54:00:0c:c4:2d in network mk-multinode-207207
	I0127 02:30:44.963095  931487 main.go:141] libmachine: (multinode-207207) Calling .GetSSHPort
	I0127 02:30:44.963250  931487 main.go:141] libmachine: (multinode-207207) Calling .GetSSHKeyPath
	I0127 02:30:44.963383  931487 main.go:141] libmachine: (multinode-207207) Calling .GetSSHUsername
	I0127 02:30:44.963502  931487 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/multinode-207207/id_rsa Username:docker}
	I0127 02:30:45.047938  931487 ssh_runner.go:195] Run: systemctl --version
	I0127 02:30:45.053938  931487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:30:45.068861  931487 kubeconfig.go:125] found "multinode-207207" server: "https://192.168.39.41:8443"
	I0127 02:30:45.068905  931487 api_server.go:166] Checking apiserver status ...
	I0127 02:30:45.068973  931487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:30:45.082505  931487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup
	W0127 02:30:45.092279  931487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:30:45.092337  931487 ssh_runner.go:195] Run: ls
	I0127 02:30:45.096851  931487 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0127 02:30:45.101756  931487 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0127 02:30:45.101783  931487 status.go:463] multinode-207207 apiserver status = Running (err=<nil>)
	I0127 02:30:45.101792  931487 status.go:176] multinode-207207 status: &{Name:multinode-207207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:30:45.101832  931487 status.go:174] checking status of multinode-207207-m02 ...
	I0127 02:30:45.102146  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:45.102182  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:45.118126  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
	I0127 02:30:45.118560  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:45.118994  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:45.119028  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:45.119373  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:45.119583  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetState
	I0127 02:30:45.121389  931487 status.go:371] multinode-207207-m02 host status = "Running" (err=<nil>)
	I0127 02:30:45.121406  931487 host.go:66] Checking if "multinode-207207-m02" exists ...
	I0127 02:30:45.121699  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:45.121742  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:45.137616  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I0127 02:30:45.138110  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:45.138656  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:45.138682  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:45.139003  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:45.139190  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetIP
	I0127 02:30:45.141915  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | domain multinode-207207-m02 has defined MAC address 52:54:00:53:f6:96 in network mk-multinode-207207
	I0127 02:30:45.142396  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:f6:96", ip: ""} in network mk-multinode-207207: {Iface:virbr1 ExpiryTime:2025-01-27 03:29:02 +0000 UTC Type:0 Mac:52:54:00:53:f6:96 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-207207-m02 Clientid:01:52:54:00:53:f6:96}
	I0127 02:30:45.142428  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | domain multinode-207207-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:53:f6:96 in network mk-multinode-207207
	I0127 02:30:45.142587  931487 host.go:66] Checking if "multinode-207207-m02" exists ...
	I0127 02:30:45.142910  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:45.142952  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:45.159139  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0127 02:30:45.159657  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:45.160171  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:45.160198  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:45.160502  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:45.160656  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .DriverName
	I0127 02:30:45.160823  931487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:30:45.160849  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetSSHHostname
	I0127 02:30:45.163756  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | domain multinode-207207-m02 has defined MAC address 52:54:00:53:f6:96 in network mk-multinode-207207
	I0127 02:30:45.164144  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:f6:96", ip: ""} in network mk-multinode-207207: {Iface:virbr1 ExpiryTime:2025-01-27 03:29:02 +0000 UTC Type:0 Mac:52:54:00:53:f6:96 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-207207-m02 Clientid:01:52:54:00:53:f6:96}
	I0127 02:30:45.164174  931487 main.go:141] libmachine: (multinode-207207-m02) DBG | domain multinode-207207-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:53:f6:96 in network mk-multinode-207207
	I0127 02:30:45.164280  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetSSHPort
	I0127 02:30:45.164417  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetSSHKeyPath
	I0127 02:30:45.164589  931487 main.go:141] libmachine: (multinode-207207-m02) Calling .GetSSHUsername
	I0127 02:30:45.164698  931487 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-897624/.minikube/machines/multinode-207207-m02/id_rsa Username:docker}
	I0127 02:30:45.243546  931487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:30:45.257125  931487 status.go:176] multinode-207207-m02 status: &{Name:multinode-207207-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:30:45.257169  931487 status.go:174] checking status of multinode-207207-m03 ...
	I0127 02:30:45.257533  931487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:30:45.257588  931487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:30:45.273744  931487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0127 02:30:45.274225  931487 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:30:45.274742  931487 main.go:141] libmachine: Using API Version  1
	I0127 02:30:45.274778  931487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:30:45.275073  931487 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:30:45.275315  931487 main.go:141] libmachine: (multinode-207207-m03) Calling .GetState
	I0127 02:30:45.276995  931487 status.go:371] multinode-207207-m03 host status = "Stopped" (err=<nil>)
	I0127 02:30:45.277010  931487 status.go:384] host is not running, skipping remaining checks
	I0127 02:30:45.277017  931487 status.go:176] multinode-207207-m03 status: &{Name:multinode-207207-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
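
The status calls above exit non-zero once a node is stopped (exit status 7 in this run) while still printing the per-node report, so a caller has to treat that exit code as informational rather than fatal. A hedged Go sketch of handling it, based only on the behaviour observed in this log:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-207207", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Print(string(out)) // every host running
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 was observed above when a node is stopped; the
		// per-node report on stdout is still valid and worth printing.
		fmt.Print(string(out))
		fmt.Println("note: at least one host reported as Stopped")
	default:
		log.Fatalf("status failed: %v", err)
	}
}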

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-207207 node start m03 -v=7 --alsologtostderr: (38.343191384s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-207207
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-207207
E0127 02:31:33.569114  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-207207: (3m2.856141424s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-207207 --wait=true -v=8 --alsologtostderr
E0127 02:34:48.566991  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:36:33.566980  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-207207 --wait=true -v=8 --alsologtostderr: (2m24.100573664s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-207207
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.06s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-207207 node delete m03: (2.096729929s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.64s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 stop
E0127 02:37:51.634339  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:39:48.567210  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-207207 stop: (3m1.895901481s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-207207 status: exit status 7 (95.741668ms)

                                                
                                                
-- stdout --
	multinode-207207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-207207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr: exit status 7 (88.924585ms)

                                                
                                                
-- stdout --
	multinode-207207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-207207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:39:56.008354  934435 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:39:56.008475  934435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:39:56.008487  934435 out.go:358] Setting ErrFile to fd 2...
	I0127 02:39:56.008493  934435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:39:56.008668  934435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:39:56.008850  934435 out.go:352] Setting JSON to false
	I0127 02:39:56.008883  934435 mustload.go:65] Loading cluster: multinode-207207
	I0127 02:39:56.009007  934435 notify.go:220] Checking for updates...
	I0127 02:39:56.009322  934435 config.go:182] Loaded profile config "multinode-207207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:39:56.009347  934435 status.go:174] checking status of multinode-207207 ...
	I0127 02:39:56.009786  934435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:39:56.009893  934435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:39:56.024807  934435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I0127 02:39:56.025324  934435 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:39:56.025868  934435 main.go:141] libmachine: Using API Version  1
	I0127 02:39:56.025907  934435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:39:56.026270  934435 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:39:56.026463  934435 main.go:141] libmachine: (multinode-207207) Calling .GetState
	I0127 02:39:56.028178  934435 status.go:371] multinode-207207 host status = "Stopped" (err=<nil>)
	I0127 02:39:56.028199  934435 status.go:384] host is not running, skipping remaining checks
	I0127 02:39:56.028207  934435 status.go:176] multinode-207207 status: &{Name:multinode-207207 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:39:56.028243  934435 status.go:174] checking status of multinode-207207-m02 ...
	I0127 02:39:56.028520  934435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 02:39:56.028557  934435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 02:39:56.043611  934435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0127 02:39:56.044021  934435 main.go:141] libmachine: () Calling .GetVersion
	I0127 02:39:56.044522  934435 main.go:141] libmachine: Using API Version  1
	I0127 02:39:56.044547  934435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 02:39:56.044881  934435 main.go:141] libmachine: () Calling .GetMachineName
	I0127 02:39:56.045089  934435 main.go:141] libmachine: (multinode-207207-m02) Calling .GetState
	I0127 02:39:56.046646  934435 status.go:371] multinode-207207-m02 host status = "Stopped" (err=<nil>)
	I0127 02:39:56.046661  934435 status.go:384] host is not running, skipping remaining checks
	I0127 02:39:56.046667  934435 status.go:176] multinode-207207-m02 status: &{Name:multinode-207207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.08s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (98.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-207207 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 02:41:33.566934  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-207207 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.891345763s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-207207 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.43s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-207207
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-207207-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-207207-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.322579ms)

                                                
                                                
-- stdout --
	* [multinode-207207-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-207207-m02' is duplicated with machine name 'multinode-207207-m02' in profile 'multinode-207207'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-207207-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-207207-m03 --driver=kvm2  --container-runtime=crio: (39.65357238s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-207207
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-207207: exit status 80 (212.807187ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-207207 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-207207-m03 already exists in multinode-207207-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-207207-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.79s)

                                                
                                    
TestScheduledStopUnix (115.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-739127 --memory=2048 --driver=kvm2  --container-runtime=crio
E0127 02:46:16.647367  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:46:33.567061  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-739127 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.61960136s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739127 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-739127 -n scheduled-stop-739127
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739127 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 02:46:48.159916  904889 retry.go:31] will retry after 101.255µs: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.161101  904889 retry.go:31] will retry after 89.069µs: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.162258  904889 retry.go:31] will retry after 136.686µs: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.163415  904889 retry.go:31] will retry after 209.652µs: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.164582  904889 retry.go:31] will retry after 313.155µs: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.165712  904889 retry.go:31] will retry after 1.068775ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.166871  904889 retry.go:31] will retry after 1.016493ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.167999  904889 retry.go:31] will retry after 1.166863ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.170227  904889 retry.go:31] will retry after 3.529689ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.174473  904889 retry.go:31] will retry after 2.080232ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.177252  904889 retry.go:31] will retry after 5.738139ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.183523  904889 retry.go:31] will retry after 5.248762ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.189872  904889 retry.go:31] will retry after 14.227478ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.205184  904889 retry.go:31] will retry after 17.949409ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.223522  904889 retry.go:31] will retry after 16.37608ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
I0127 02:46:48.240771  904889 retry.go:31] will retry after 58.579144ms: open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/scheduled-stop-739127/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739127 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739127 -n scheduled-stop-739127
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-739127
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-739127 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-739127
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-739127: exit status 7 (77.487086ms)

                                                
                                                
-- stdout --
	scheduled-stop-739127
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739127 -n scheduled-stop-739127
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-739127 -n scheduled-stop-739127: exit status 7 (66.900036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-739127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-739127
--- PASS: TestScheduledStopUnix (115.31s)
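
TestScheduledStopUnix drives the `--schedule` and `--cancel-scheduled` flags shown above. A small Go sketch of the same arm/inspect/cancel cycle, assuming the profile name and status template taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes the minikube binary used throughout this report and fails fast.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "scheduled-stop-739127"

	run("stop", "-p", profile, "--schedule", "5m") // arm a stop five minutes out
	// Inspect the pending timer through the status template used by the test.
	fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", profile, "-n", profile))
	run("stop", "-p", profile, "--cancel-scheduled") // disarm the scheduled stop
}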

                                                
                                    
TestRunningBinaryUpgrade (202.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3683032917 start -p running-upgrade-078958 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3683032917 start -p running-upgrade-078958 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.295219532s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-078958 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-078958 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.021738967s)
helpers_test.go:175: Cleaning up "running-upgrade-078958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-078958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-078958: (1.163240445s)
--- PASS: TestRunningBinaryUpgrade (202.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.760673ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-954952] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
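The MK_USAGE failure above is the intended behaviour: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the fix the error message suggests (profile name is an example):
	# this combination exits with status 14 (MK_USAGE)
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# drop the globally configured kubernetes-version, then retry without the flag
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio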

                                                
                                    
TestPause/serial/Start (106.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-622238 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-622238 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.344311235s)
--- PASS: TestPause/serial/Start (106.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954952 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954952 --driver=kvm2  --container-runtime=crio: (1m33.033793498s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-954952 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (149.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3949154097 start -p stopped-upgrade-883403 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3949154097 start -p stopped-upgrade-883403 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m31.749712031s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3949154097 -p stopped-upgrade-883403 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3949154097 -p stopped-upgrade-883403 stop: (2.35547924s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-883403 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-883403 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.184549123s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.365158297s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-954952 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-954952 status -o json: exit status 2 (246.823006ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-954952","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-954952
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-954952: (1.293047777s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.91s)
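The JSON status printed above can be checked programmatically. A sketch assuming jq is available; the field names (Host, Kubelet, APIServer) are taken from the output above:
	# exit status 2 only means some components are stopped; capture the JSON anyway
	minikube -p NoKubernetes-954952 status -o json > status.json || true
	# in --no-kubernetes mode, Host stays Running while Kubelet/APIServer report Stopped
	jq -r '.Host, .Kubelet, .APIServer' status.json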

                                                
                                    
TestNoKubernetes/serial/Start (36.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954952 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.663469892s)
--- PASS: TestNoKubernetes/serial/Start (36.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-954952 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-954952 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.263792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
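The check above simply asks systemd inside the guest whether kubelet is active; exit status 1 from minikube ssh (status 3 from systemctl) is the expected "inactive" result. A sketch of the same probe:
	# non-zero exit means kubelet is not running, which is what --no-kubernetes expects
	minikube ssh -p NoKubernetes-954952 "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet is active" || echo "kubelet is not running"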

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.516615431s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.985046653s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.50s)
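Listing profiles in JSON makes the output scriptable. A sketch assuming jq is available; the .valid[].Name path is an assumption about the JSON layout, not something shown in this log:
	minikube profile list --output=json > profiles.json
	# extract profile names (the .valid[].Name path is an assumed layout)
	jq -r '.valid[].Name' profiles.json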

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-954952
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-954952: (1.414622403s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (32.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954952 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954952 --driver=kvm2  --container-runtime=crio: (32.889469361s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.89s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-883403
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestNetworkPlugins/group/false (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-284111 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-284111 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (119.772877ms)

                                                
                                                
-- stdout --
	* [false-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:51:31.847919  942355 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:51:31.848055  942355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:51:31.848065  942355 out.go:358] Setting ErrFile to fd 2...
	I0127 02:51:31.848071  942355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:51:31.848287  942355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-897624/.minikube/bin
	I0127 02:51:31.848919  942355 out.go:352] Setting JSON to false
	I0127 02:51:31.850017  942355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12835,"bootTime":1737933457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:51:31.850138  942355 start.go:139] virtualization: kvm guest
	I0127 02:51:31.852275  942355 out.go:177] * [false-284111] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:51:31.853702  942355 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:51:31.853696  942355 notify.go:220] Checking for updates...
	I0127 02:51:31.856125  942355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:51:31.857292  942355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-897624/kubeconfig
	I0127 02:51:31.858556  942355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-897624/.minikube
	I0127 02:51:31.859677  942355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:51:31.860871  942355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:51:31.862366  942355 config.go:182] Loaded profile config "NoKubernetes-954952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 02:51:31.862500  942355 config.go:182] Loaded profile config "force-systemd-env-096099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 02:51:31.862616  942355 config.go:182] Loaded profile config "kubernetes-upgrade-080871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 02:51:31.862732  942355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:51:31.903262  942355 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 02:51:31.904426  942355 start.go:297] selected driver: kvm2
	I0127 02:51:31.904463  942355 start.go:901] validating driver "kvm2" against <nil>
	I0127 02:51:31.904475  942355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:51:31.906485  942355 out.go:201] 
	W0127 02:51:31.907855  942355 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 02:51:31.909157  942355 out.go:201] 

                                                
                                                
** /stderr **
E0127 02:51:33.566748  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-284111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-284111

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284111"

                                                
                                                
----------------------- debugLogs end: false-284111 [took: 3.643316553s] --------------------------------
helpers_test.go:175: Cleaning up "false-284111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-284111
--- PASS: TestNetworkPlugins/group/false (3.94s)
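The exit status 14 above is expected: --cni=false is rejected because the crio runtime requires a CNI plugin. A sketch of a start line that satisfies the constraint; bridge is used as an example CNI value and the profile name is illustrative:
	# rejected: crio requires CNI
	minikube start -p cni-demo --cni=false --driver=kvm2 --container-runtime=crio
	# accepted: choose an explicit CNI instead (bridge shown as an example)
	minikube start -p cni-demo --cni=bridge --driver=kvm2 --container-runtime=crio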

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-954952 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-954952 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.927538ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (77.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-844432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 02:54:31.636665  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:54:48.569476  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-844432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m17.72123316s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-844432 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7cf914e-77e6-4aa9-b0ca-5d35e3e1bda1] Pending
helpers_test.go:344: "busybox" [c7cf914e-77e6-4aa9-b0ca-5d35e3e1bda1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c7cf914e-77e6-4aa9-b0ca-5d35e3e1bda1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004219748s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-844432 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)
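The deploy step above is plain kubectl against the profile's context. A sketch of the same create/wait/exec sequence; the kubectl wait line is an equivalent way to express the readiness check the test performs in Go:
	kubectl --context no-preload-844432 create -f testdata/busybox.yaml
	# wait for the pod labelled integration-test=busybox to become Ready
	kubectl --context no-preload-844432 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	# sanity-check the container's file-descriptor limit, as the test does
	kubectl --context no-preload-844432 exec busybox -- /bin/sh -c "ulimit -n"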

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-844432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-844432 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)
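Enabling an addon on a running profile and confirming the Deployment landed works the same way outside the test. A sketch; the --images/--registries overrides used above point at a fake registry for test purposes and are omitted here:
	minikube addons enable metrics-server -p no-preload-844432
	# confirm the metrics-server deployment exists in kube-system
	kubectl --context no-preload-844432 describe deploy/metrics-server -n kube-system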

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-844432 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-844432 --alsologtostderr -v=3: (1m31.020247549s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-844432 -n no-preload-844432
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-844432 -n no-preload-844432: exit status 7 (76.05203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-844432 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-896179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-896179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m26.730005717s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-896179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [862fd474-8ef2-4ff8-a329-89cae6532fe8] Pending
helpers_test.go:344: "busybox" [862fd474-8ef2-4ff8-a329-89cae6532fe8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003703693s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-896179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-896179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-896179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-896179 --alsologtostderr -v=3
E0127 02:59:48.566724  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-896179 --alsologtostderr -v=3: (1m31.01924851s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-542356 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-542356 --alsologtostderr -v=3: (2.293149783s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-542356 -n old-k8s-version-542356: exit status 7 (68.323446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-542356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-896179 -n embed-certs-896179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-896179 -n embed-certs-896179: exit status 7 (77.877569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-896179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (301.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-896179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 03:01:33.566809  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-896179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m1.735202981s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-896179 -n embed-certs-896179
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-150897 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 03:02:56.648757  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-150897 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m24.207982371s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-150897 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dfe8a0f7-78d8-45d9-ab4d-384fe4f61b29] Pending
helpers_test.go:344: "busybox" [dfe8a0f7-78d8-45d9-ab4d-384fe4f61b29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dfe8a0f7-78d8-45d9-ab4d-384fe4f61b29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004489447s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-150897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-150897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-150897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-150897 --alsologtostderr -v=3
E0127 03:04:48.567414  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/functional-308251/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-150897 --alsologtostderr -v=3: (1m31.048369118s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-150897 -n default-k8s-diff-port-150897
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-150897 -n default-k8s-diff-port-150897: exit status 7 (79.269501ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-150897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ssvc4" [5bee0cf3-5350-4a26-8398-2f90696f8ccc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003188516s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ssvc4" [5bee0cf3-5350-4a26-8398-2f90696f8ccc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004621633s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-896179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-896179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-896179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-896179 -n embed-certs-896179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-896179 -n embed-certs-896179: exit status 2 (264.194746ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-896179 -n embed-certs-896179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-896179 -n embed-certs-896179: exit status 2 (258.282774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-896179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-896179 -n embed-certs-896179
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-896179 -n embed-certs-896179
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.69s)
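
The Pause check above is a five-step sequence: pause the profile, read the apiserver and kubelet state through templated status output (status exits 2 while components are paused/stopped, which the test treats as acceptable), then unpause and read the state again. A minimal shell sketch of the same sequence, assuming the embed-certs-896179 profile from this run is still present:

    out/minikube-linux-amd64 pause -p embed-certs-896179 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-896179 -n embed-certs-896179   # prints "Paused", exits 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p embed-certs-896179 -n embed-certs-896179   # prints "Stopped", exits 2
    out/minikube-linux-amd64 unpause -p embed-certs-896179 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-896179 -n embed-certs-896179   # should exit 0 again once unpaused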

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-446781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 03:06:33.566878  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/addons-903003/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-446781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (44.090248593s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.09s)
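
The FirstStart flags above put the profile in CNI mode and pass a custom pod network CIDR straight through to kubeadm. A sketch of the same start plus a follow-up check that the CIDR was actually handed out to the node; the kubectl query is an illustrative addition, not something this test runs:

    out/minikube-linux-amd64 start -p newest-cni-446781 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1
    # Node podCIDR should be carved out of 10.42.0.0/16 (illustrative check, not part of the test).
    kubectl --context newest-cni-446781 get nodes -o jsonpath='{.items[0].spec.podCIDR}'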

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-446781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-446781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.147013184s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-446781 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-446781 --alsologtostderr -v=3: (7.320088772s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446781 -n newest-cni-446781
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446781 -n newest-cni-446781: exit status 7 (78.228725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-446781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
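
EnableAddonAfterStop shows that addon toggles are accepted while the profile is stopped: status exits 7 with Host reporting Stopped (treated as acceptable), and enabling the dashboard addon is recorded in the profile config so it can be applied on the next start. The two commands, as they appear in the log:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-446781 -n newest-cni-446781   # prints "Stopped", exits 7
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-446781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4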

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-446781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-446781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.47661237s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-446781 -n newest-cni-446781
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-446781 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-446781 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446781 -n newest-cni-446781
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446781 -n newest-cni-446781: exit status 2 (237.87705ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446781 -n newest-cni-446781
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446781 -n newest-cni-446781: exit status 2 (242.550406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-446781 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-446781 -n newest-cni-446781
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-446781 -n newest-cni-446781
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (56.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (56.857695766s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-284111 "pgrep -a kubelet"
I0127 03:08:51.582847  904889 config.go:182] Loaded profile config "auto-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k75f7" [aec014cf-73e2-4417-9566-eae90ff88b33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k75f7" [aec014cf-73e2-4417-9566-eae90ff88b33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.0045048s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)
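
NetCatPod force-replaces a small netcat deployment from the repository's testdata and then polls pods labelled app=netcat until they are Running. Roughly the same wait can be reproduced with plain kubectl, assuming the working directory is the minikube test tree where testdata/netcat-deployment.yaml lives:

    kubectl --context auto-284111 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-284111 wait --for=condition=ready pod -l app=netcat --timeout=15m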

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (16.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-284111 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-284111 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128667864s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 03:09:17.935354  904889 retry.go:31] will retry after 730.947155ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.08s)
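
In the DNS check above, the first in-cluster lookup timed out ("connection timed out; no servers could be reached"), the harness retried after ~731ms, and the second attempt resolved, so the test still passed. The probe itself is a single lookup of kubernetes.default from the netcat pod; if it keeps failing, CoreDNS is the usual first place to look (the kube-dns label below is the standard kubeadm one, an assumption here, not something this test checks):

    kubectl --context auto-284111 exec deployment/netcat -- nslookup kubernetes.default
    # If lookups keep timing out, check CoreDNS health first (illustrative follow-up).
    kubectl --context auto-284111 -n kube-system get pods -l k8s-app=kube-dns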

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
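
Localhost and HairPin are two nc probes run from inside the netcat pod: the first connects to the pod's own listener on localhost:8080, the second connects back to it through the netcat service name, which only succeeds when the plugin handles hairpin traffic. Both commands as in the log, assuming the service created by the netcat manifest is still present:

    kubectl --context auto-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"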

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.112230508s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-682lz" [5cc35d08-2f34-43eb-b409-970dd71aa8c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004730665s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
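
The ControllerPod step only waits for the kindnet DaemonSet pod (label app=kindnet in kube-system, as above) to report Running. An equivalent one-off check with kubectl, as a sketch:

    kubectl --context kindnet-284111 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-284111 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m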

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-284111 "pgrep -a kubelet"
I0127 03:10:44.531279  904889 config.go:182] Loaded profile config "kindnet-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lj9jn" [59cef13c-a93f-45bb-8fcc-860e7cc3c25a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lj9jn" [59cef13c-a93f-45bb-8fcc-860e7cc3c25a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004757041s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m19.001085211s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z8vrn" [6b35b382-ce4c-4f1c-ade9-837fc3e21aa4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004949516s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-284111 "pgrep -a kubelet"
I0127 03:12:34.610132  904889 config.go:182] Loaded profile config "calico-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xsn5k" [a39931ed-df0d-47a5-ace4-7829008c8f72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xsn5k" [a39931ed-df0d-47a5-ace4-7829008c8f72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003620213s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m11.852045872s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.85s)
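
This profile is started with a user-supplied CNI manifest (--cni=testdata/kube-flannel.yaml) rather than a built-in plugin name. A sketch of the invocation plus a check that the flannel pods came up; the app=flannel label and kube-flannel namespace are taken from the flannel ControllerPod check later in this report and are assumed to match the testdata manifest:

    out/minikube-linux-amd64 start -p custom-flannel-284111 --memory=3072 --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio
    # Pods applied by the custom manifest (label/namespace assumed, not asserted by this test).
    kubectl --context custom-flannel-284111 -n kube-flannel get pods -l app=flannel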

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-284111 "pgrep -a kubelet"
I0127 03:14:13.984101  904889 config.go:182] Loaded profile config "custom-flannel-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m8fq8" [fd74201e-eacf-4cdf-8461-f4ff779f254d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m8fq8" [fd74201e-eacf-4cdf-8461-f4ff779f254d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00391335s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (60.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m0.687366038s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (60.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-284111 "pgrep -a kubelet"
I0127 03:15:42.296417  904889 config.go:182] Loaded profile config "enable-default-cni-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pd9vq" [4d46626c-7afe-48ca-a4df-dcf30983df27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pd9vq" [4d46626c-7afe-48ca-a4df-dcf30983df27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003933428s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (75.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m15.548609875s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2pqcj" [587dad2f-70cb-433b-b648-6b402e58bf4c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003748367s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-284111 "pgrep -a kubelet"
I0127 03:17:29.878849  904889 config.go:182] Loaded profile config "flannel-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7lshd" [05378c2f-e00d-4e48-851e-5550db1cfcf1] Pending
E0127 03:17:30.927491  904889 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-897624/.minikube/profiles/calico-284111/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-7lshd" [05378c2f-e00d-4e48-851e-5550db1cfcf1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7lshd" [05378c2f-e00d-4e48-851e-5550db1cfcf1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004333454s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (58.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-284111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (58.704889029s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-284111 "pgrep -a kubelet"
I0127 03:18:56.880673  904889 config.go:182] Loaded profile config "bridge-284111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-284111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z7fk2" [824e8ca0-10ee-4abc-a6c4-04072c01606d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z7fk2" [824e8ca0-10ee-4abc-a6c4-04072c01606d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004215714s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-284111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-284111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/312)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.17
276 TestNetworkPlugins/group/kubenet 3.26
284 TestNetworkPlugins/group/cilium 6.02
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-903003 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-113637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-113637
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-284111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-284111

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284111"

                                                
                                                
----------------------- debugLogs end: kubenet-284111 [took: 3.08957405s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-284111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-284111
--- SKIP: TestNetworkPlugins/group/kubenet (3.26s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-284111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-284111" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-284111

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-284111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284111"

                                                
                                                
----------------------- debugLogs end: cilium-284111 [took: 5.856406377s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-284111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-284111
--- SKIP: TestNetworkPlugins/group/cilium (6.02s)